994 results for Multimodal image registration
Resumo:
Information theory-based metrics such as mutual information (MI) are widely used as similarity measures for multimodal registration. Nevertheless, such metrics may lead to matching ambiguity in non-rigid registration, and maximization of MI alone does not necessarily produce an optimal solution. In this paper, we propose a segmentation-assisted similarity metric based on point-wise mutual information (PMI). This similarity metric, termed SPMI, enhances registration accuracy by considering tissue classification probabilities as prior information, generated by an expectation-maximization (EM) algorithm. The diffeomorphic demons algorithm is then adopted as the registration model and is optimized in a hierarchical framework (H-SPMI) based on different levels of anatomical structure as prior knowledge. The proposed method is evaluated using BrainWeb synthetic data and clinical fMRI images. Qualitative and quantitative assessments were performed, as well as a sensitivity analysis with respect to segmentation error. Compared to pure intensity-based approaches that only maximize mutual information, we show that the proposed algorithm provides significantly better accuracy on both synthetic and clinical data.
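The joint-histogram MI computation at the core of such intensity-based metrics can be sketched in a few lines of NumPy. This is a generic illustration, not the authors' SPMI; the 32-bin setting and the random test images are arbitrary assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based MI estimate (in nats) between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
mi_self = mutual_information(img, img)    # high: an image fully predicts itself
mi_noise = mutual_information(img, noise) # near zero for unrelated images
```

A registration optimizer would maximize this quantity over transformation parameters; the abstract's point is that this maximum alone can be ambiguous without the segmentation prior.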
Resumo:
The work presented in this thesis is divided into two distinct sections. In the first, the functional neuroimaging technique of magnetoencephalography (MEG) is described and a new technique is introduced for accurate combination of MEG and MRI co-ordinate systems. In the second part, MEG and the analysis technique of SAM are used to investigate responses of the visual system in the context of functional specialisation within the visual cortex. In chapter one, the sources of MEG signals are described, followed by a brief description of the instrumentation necessary for accurate MEG recordings. The chapter concludes by introducing the forward and inverse problems of MEG, techniques to solve the inverse problem, and a comparison of MEG with other neuroimaging techniques. Chapter two provides an important contribution to the field of research with MEG. Firstly, it is described how MEG and MRI co-ordinate systems are combined for localisation and visualisation of activated brain regions. A previously used co-registration method is then described, and a new technique is introduced. In a series of experiments, it is demonstrated that using fixed fiducial points provides a considerable improvement in the accuracy and reliability of co-registration. Chapter three introduces the visual system, starting from the retina and ending with the higher visual areas. The functions of the magnocellular and parvocellular pathways are described, and it is shown how the parallel visual pathways remain segregated throughout the visual system. The structural and functional organisation of the visual cortex is then described. Chapter four presents strong evidence in favour of the link between conscious experience and synchronised brain activity. The spatiotemporal responses of the visual cortex are measured in response to specific gratings.
It is shown that stimuli that induce visual discomfort and visual illusions share their physical properties with those that induce highly synchronised gamma-frequency oscillations in the primary visual cortex. Finally, chapter five is concerned with the localisation of colour processing in the visual cortex. In this first-ever use of Synthetic Aperture Magnetometry to investigate colour processing in the visual cortex, it is shown that in response to isoluminant chromatic gratings, the highest magnitude of cortical activity arises from area V2.
Resumo:
This work discusses the determination of breathing patterns in time sequences of images obtained from magnetic resonance (MR) imaging and their use in the temporal registration of coronal and sagittal images. The registration is performed without any triggering information or special gas to enhance contrast; the temporal image sequences are acquired in free breathing. The real movement of the lung has never been seen directly, as it is totally dependent on its surrounding muscles and collapses without them, and the visualization of the lung in motion is a current topic of research in medicine. Lung movement is not periodic and is susceptible to variations in the degree of respiration. Compared to computerized tomography (CT), MR imaging involves longer acquisition times but is preferable because it does not involve radiation. As the coronal and sagittal image sequences are orthogonal to each other, their intersection corresponds to a segment in three-dimensional space, and the registration is based on the analysis of this intersection segment. A time sequence of this intersection segment can be stacked, defining a two-dimensional spatio-temporal (2DST) image. The algorithm proposed in this work can detect asynchronous movements of the internal lung structures and of the organs surrounding the lung. It is assumed that the diaphragmatic movement is the principal movement and that all lung structures move almost synchronously. The synchronization is performed through a pattern named the respiratory function, obtained by processing a 2DST image. An interval Hough transform algorithm searches for movements synchronized with the respiratory function, and a greedy active contour algorithm adjusts small discrepancies caused by asynchronous movements in the respiratory patterns. The output is a set of respiratory patterns.
Finally, the composition of coronal and sagittal image pairs that are in the same breathing phase is realized by comparing the respiratory patterns originating from the diaphragmatic and upper boundary surfaces. When available, the respiratory patterns associated with internal lung structures are also used. The results of the proposed method are compared with a pixel-by-pixel comparison method. The proposed method increases the number of registered pairs representing composed images and allows an easy check of the breathing phase. (C) 2010 Elsevier Ltd. All rights reserved.
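The 2DST construction described above can be sketched directly: the column of each coronal frame lying on the sagittal plane is stacked over time. The frame count, image size, and intersection column below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical free-breathing coronal sequence: 40 frames of 64x64 pixels.
coronal = rng.random((40, 64, 64))
x_inter = 32  # column where the orthogonal sagittal plane cuts each coronal frame (assumed)

# Stack the intersection segment of every frame side by side:
# rows = spatial position along the segment, columns = time.
img_2dst = np.stack([frame[:, x_inter] for frame in coronal], axis=1)
```

The respiratory function is then extracted by processing this `img_2dst` array, e.g. by tracking the bright diaphragm boundary across its columns.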
Resumo:
The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during estimation of the implant's pose (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using 2 operator-defined points on the implant's main axis; (2) a simulated CBCT volume of the known implant model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
Resumo:
Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intra-operative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences, estimated by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal acts as a surrogate for the left-ventricle (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then perform the temporal alignment of these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting relative errors of 1.6±1.9% and 4.0±4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of cardiac events in the MRI and US sequences, making it possible to temporally align multimodal cardiac imaging sequences. Overall, a generic, fast and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be straightforwardly used for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
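The surrogate-signal and DTW steps can be sketched as follows. This is a minimal NumPy version assuming grayscale frames as arrays, with a generic textbook DTW rather than the authors' implementation; the synthetic sine curves stand in for two surrogate signals sampled at different frame rates:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two arrays (frames or signals)."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def surrogate_signal(frames):
    """NCC of every frame against the first (end-diastolic) frame."""
    return np.array([ncc(f, frames[0]) for f in frames])

def dtw_path(x, y):
    """Textbook O(n*m) dynamic time warping; returns the optimal warping path."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m  # backtrack from the end of the accumulated-cost matrix
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)], key=lambda s: D[s])
    path.append((0, 0))
    return path[::-1]

# Two surrogate curves covering the same cardiac cycle at different frame rates.
x = np.sin(np.linspace(0.0, 2.0 * np.pi, 20))
y = np.sin(np.linspace(0.0, 2.0 * np.pi, 30))
path = dtw_path(x, y)  # monotonic frame correspondence between the two sequences
```

Each `(i, j)` pair in `path` matches a frame of the first sequence to a frame of the second, which is how the MRI and US sequences would be synchronized.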
Resumo:
Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Resumo:
Detecting changes between images of the same scene taken at different times is of great interest for monitoring and understanding the environment. Change detection is widely used in on-land applications, but its algorithms require highly accurate geometric and photometric registration, a requirement that has precluded their use in underwater imagery in the past. In this paper, the change detection techniques currently available for on-land applications are analyzed, and a method to automatically detect changes in sequences of underwater images is proposed. Target application scenarios are habitat restoration sites, or area monitoring after sudden impacts from hurricanes or ship groundings. The method is based on the creation of a 3D terrain model from one image sequence over an area of interest. This model allows for synthesizing textured views that correspond to the same viewpoints as a second image sequence. The generated views are photometrically matched and corrected against the corresponding frames from the second sequence, and standard change detection techniques are then applied to find areas of difference. Additionally, the paper shows that it is possible to detect false positives resulting from non-rigid objects by applying the same change detection method to the first sequence exclusively. The developed method was able to correctly find the changes between two challenging sequences of images from a coral reef taken one year apart and acquired with two different cameras.
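The photometric-matching-plus-differencing idea can be illustrated on synthetic data. The linear gain/bias model and the 3-sigma threshold below are illustrative assumptions, not the paper's exact pipeline, and the images stand in for a synthesized view and its corresponding real frame after geometric registration:

```python
import numpy as np

def detect_changes(ref, cur, k=3.0):
    """Photometrically match `cur` to `ref` with a gain/bias fit, then flag
    pixels whose residual exceeds k robust standard deviations."""
    a, b = np.polyfit(ref.ravel(), cur.ravel(), 1)   # least-squares cur ~ a*ref + b
    diff = cur - (a * ref + b)
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # robust std via MAD
    return np.abs(diff) > k * sigma

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
cur = 1.2 * ref + 0.1 + rng.normal(0.0, 0.01, ref.shape)  # photometric drift + noise
cur[10:14, 10:14] += 0.5                                   # a genuine scene change
mask = detect_changes(ref, cur)  # boolean change map
```

The robust (MAD-based) scale estimate keeps genuine changes from inflating the threshold, which a plain standard deviation would.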
Resumo:
In this paper, we propose a new paradigm to carry out the registration task with a dense deformation field derived from the optical flow model and the active contour method. The proposed framework merges different tasks such as segmentation, regularization, incorporation of prior knowledge and registration into a single framework. The active contour model is at the core of our framework, even if it is used in a different way than the standard approaches. Indeed, active contours are a well-known technique for image segmentation. This technique consists in finding the curve which minimizes an energy functional designed to be minimal when the curve has reached the object contours. That way, we get accurate and smooth segmentation results. So far, the active contour model has been used to segment objects lying in images from boundary-based, region-based or shape-based information. Our registration technique will profit from all these families of active contours to determine a dense deformation field defined on the whole image. A well-suited application of our model is atlas registration in medical imaging, which consists in automatically delineating anatomical structures. We present results on 2D synthetic images to show the performance of our non-rigid deformation field based on a natural registration term. We also present registration results on real 3D medical data with a large space-occupying tumor substantially deforming surrounding structures, which constitutes a highly challenging problem.
Resumo:
Ophthalmologists typically acquire different image modalities to diagnose eye pathologies, including, e.g., fundus photography, optical coherence tomography, computed tomography, and magnetic resonance imaging (MRI). These images are complementary and express the same pathologies in different ways, and some pathologies are only visible in a particular modality. Thus, it is beneficial for the ophthalmologist to have these modalities fused into a single patient-specific model. The goal of this paper is the fusion of fundus photography with segmented MRI volumes, which adds information to the MRI that was not visible before, such as vessels and the macula. This paper's contributions include automatic detection of the optic disc, the fovea, and the optic axis, and an automatic segmentation of the vitreous humor of the eye.
Resumo:
In the last five years, Deep Brain Stimulation (DBS) has become the most popular and effective surgical technique for the treatment of Parkinson's disease (PD). The Subthalamic Nucleus (STN) is the usual target when applying DBS. Unfortunately, the STN is in general not visible in common medical imaging modalities, so atlas-based segmentation is commonly used to locate it in the images. In this paper, we propose a scheme that allows both comparing different registration algorithms and evaluating their ability to locate the STN automatically. Using this scheme, we can evaluate expert variability against the error of the algorithms, and we demonstrate that automatic STN location is possible and as accurate as the methods currently used.
Resumo:
In this paper we present a new method to track bone movements in stereoscopic X-ray image series of the knee joint. The method is based on two different X-ray image sets: a rotational series of acquisitions of the still subject knee that will allow the tomographic reconstruction of the three-dimensional volume (model), and a stereoscopic image series of orthogonal projections as the subject performs movements. Tracking the movements of bones throughout the stereoscopic image series means determining, for each frame, the best pose of every moving element (bone) previously identified in the 3D reconstructed model. The quality of a pose is reflected in the similarity between its simulated projections and the actual radiographs. We use direct Fourier reconstruction to approximate the three-dimensional volume of the knee joint. Then, to avoid the expensive computation of digitally rendered radiographs (DRR) for pose recovery, we reformulate the tracking problem in the Fourier domain. Under the hypothesis of parallel X-ray beams, we use the central-slice-projection theorem to replace the heavy 2D-to-3D registration of projections in the signal domain by efficient slice-to-volume registration in the Fourier domain. Focusing on rotational movements, the translation-relevant phase information can be discarded and we only consider scalar Fourier amplitudes. The core of our motion tracking algorithm can be implemented as a classical frame-wise slice-to-volume registration task. Preliminary results on both synthetic and real images confirm the validity of our approach.
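The central-slice (projection-slice) theorem this method relies on is easy to verify numerically in 2D, where the 1D Fourier transform of a parallel-beam projection equals the central line of the object's 2D Fourier transform; the 3D slice-to-volume case is analogous. A minimal NumPy check on an arbitrary test image:

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.random((64, 64))  # arbitrary 2D "object"

proj = img.sum(axis=0)                  # parallel projection along the y axis
ft_proj = np.fft.fft(proj)              # 1D Fourier transform of the projection
central_slice = np.fft.fft2(img)[0, :]  # ky = 0 line of the 2D Fourier transform

# The two agree up to floating-point error.
assert np.allclose(ft_proj, central_slice)
```

This identity is what lets the method compare a 2D projection against one slice of the precomputed 3D Fourier volume instead of rendering full DRRs.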
Resumo:
Recent advances in non-invasive brain imaging allow the visualization of the different aspects of complex brain dynamics. Approaches based on a combination of imaging techniques facilitate the investigation and linking of multiple aspects of information processing, and are becoming a leading tool for understanding the neural basis of various brain functions. Perception, motion, and cognition involve the formation of cooperative neuronal assemblies distributed over the cerebral cortex. In this research, we explore the characteristics of interhemispheric assemblies in the visual brain by taking advantage of the complementary characteristics provided by EEG (electroencephalography) and fMRI (functional Magnetic Resonance Imaging): the high temporal resolution of EEG and the high spatial resolution of fMRI. In the first part of this thesis we investigate the response of the visual areas to an interhemispheric perceptual grouping task. We use EEG coherence as a measure of synchronization and the BOLD (Blood Oxygenation Level Dependent) response as a measure of the related brain activation. The increase of interhemispheric EEG coherence, restricted to the occipital electrodes and to the EEG beta band, and its linear relation to the BOLD responses in the VP/V4 area point to a trans-hemispheric synchronous neuronal assembly involved in early perceptual grouping. This result encouraged us to explore, with the same multimodal approach, the formation of synchronous trans-hemispheric networks induced by stimuli of various spatial frequencies. We found the involvement of ventral and medio-dorsal visual networks modulated by the spatial frequency content of the stimulus. Thus, based on the combination of EEG coherence and fMRI BOLD data, we identified visual networks with different sensitivities to integrating low vs. high spatial frequencies. In the second part of this work we test the hypothesis that the increase of brain activity during perceptual grouping depends on the activity of the callosal axons interconnecting the visual areas involved. As the corpus callosum matures progressively over the first two decades of life, we investigated, in children of 7-13 years, functional (activation with fMRI) and morphological (myelination of the corpus callosum with Magnetization Transfer Imaging (MTI)) aspects of spatial integration. In children, the activation associated with spatial integration across visual fields was, as in adults, localized in the visual ventral stream, but limited to a part of the area activated in adults. The strong correlation between individual BOLD responses in this area and the myelination of the splenial system of fibers points to myelination as a significant factor in the development of the spatial integration ability.
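The EEG coherence measure used here can be illustrated with SciPy's Welch-based estimator on synthetic signals. The sampling rate, band limits, and signal model below are arbitrary assumptions, meant only to show how a shared beta-band oscillation produces a coherence peak between two "electrodes":

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                        # sampling rate in Hz (assumed)
t = np.arange(0.0, 20.0, 1 / fs)  # 20 s of signal
rng = np.random.default_rng(3)

# Two electrode-like signals sharing a 20 Hz (beta-band) component plus independent noise.
shared = np.sin(2 * np.pi * 20.0 * t)
x = shared + rng.normal(0.0, 1.0, t.size)
y = shared + rng.normal(0.0, 1.0, t.size)

f, cxy = coherence(x, y, fs=fs, nperseg=512)  # Welch-averaged magnitude-squared coherence
beta = (f >= 15) & (f <= 30)                  # beta band (approximate limits)
# Coherence peaks in the band containing the shared oscillation and stays low elsewhere.
```

In the study, the analogous quantity is computed between occipital electrode pairs, and its increase in the beta band indexes interhemispheric synchronization.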
Resumo:
This paper presents a new non-parametric atlas registration framework, derived from the optical flow model and active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted based on the position of surrounding visible structures, namely the lateral and third ventricles. An STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve the results of state-of-the-art targeting methods while reducing the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far and to the targeting experts' variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.