90 results for Voxels
Abstract:
In this study the distribution of intramyocellular lipids (IMCL) in human calf muscles was determined by 1H-MR spectroscopic imaging (MRSI) measurements. An obstacle for MRSI measurements in the calf, including different muscles, is the inevitable inclusion of regions with high concentrations of extramyocellular lipids (EMCL). This can lead to signal bleeding and consequently to unpredictable overlaps of IMCL resonances with EMCL in voxels of interest. The results of this study show that signal bleeding from EMCL can be substantially reduced in voxels from calf muscles by the application of a lipid extrapolation (LE) procedure (Haupt et al., Magn Reson Med 1996;35:678). The spectra of all voxels located within muscle tissue were fitted, and the metabolite values were assigned to one of 10 different muscles based on image segmentation. Significant IMCL differences between some muscles were obtained, with high values in m. soleus and two to three times lower values in the tibialis anterior, tibialis posterior, and gastrocnemius muscles. In addition to gross differences between muscles, significant intersubject differences were observed in both IMCL content and distribution over different muscles. A significant correlation between fiber orientation (obtained from orientation-dependent dipolar coupling of creatine and taurine resonances) and IMCL content was found, indicating that IMCL content is directly correlated to biomechanical properties.
Abstract:
A nonlinear viscoelastic image registration algorithm based on the demons paradigm and incorporating an inverse consistency constraint (ICC) is implemented. An inverse consistent and symmetric cost function using mutual information (MI) as a similarity measure is employed. The cost function also includes regularization of the transformation and the inverse consistency error (ICE). The uncertainties in balancing the various terms in the cost function are avoided by alternately minimizing the similarity measure, the regularization of the transformation, and the ICE terms. Diffeomorphic registration, which prevents folding and/or tearing in the deformation, is achieved by a composition scheme. The quality of the image registration is first demonstrated by constructing a brain atlas from 20 adult brains (age range 30-60 years). It is shown that with this registration technique: (1) the Jacobian determinant is positive for all voxels and (2) the average ICE is around 0.004 voxels with a maximum value below 0.1 voxels. Further, deformation-based segmentation on the Internet Brain Segmentation Repository, a publicly available dataset, yielded a high Dice similarity index (DSI) of 94.7% for the cerebellum and 74.7% for the hippocampus, attesting to the quality of our registration method.
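As an illustration of the two quality checks reported above, the following minimal 2D sketch (assuming NumPy/SciPy, not the authors' implementation) computes the inverse consistency error for a pair of forward/backward displacement fields and verifies that the Jacobian determinant of the forward mapping stays positive.

```python
# Minimal 2D sketch: ICE and Jacobian-determinant check for displacement fields.
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_error(u, v):
    """u, v: forward/backward displacement fields, shape (2, H, W), in voxel units."""
    H, W = u.shape[1:]
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([yy + v[0], xx + v[1]])           # sample points x + v(x)
    # Compose the two mappings: residual(x) = v(x) + u(x + v(x))
    u_at = np.stack([map_coordinates(u[c], coords, order=1, mode="nearest")
                     for c in range(2)])
    return np.sqrt(((v + u_at) ** 2).sum(axis=0))        # ICE per voxel

def jacobian_determinant(u):
    """Determinant of the Jacobian of the mapping x -> x + u(x)."""
    dy_uy, dx_uy = np.gradient(u[0])
    dy_ux, dx_ux = np.gradient(u[1])
    return (1 + dy_uy) * (1 + dx_ux) - dx_uy * dy_ux

if __name__ == "__main__":
    yy, xx = np.meshgrid(np.linspace(0, 2 * np.pi, 32),
                         np.linspace(0, 2 * np.pi, 32), indexing="ij")
    u = np.stack([0.5 * np.sin(xx), 0.5 * np.cos(yy)])   # smooth toy forward field
    v = -u                                               # approximate inverse
    ice = inverse_consistency_error(u, v)
    print("mean ICE (voxels):", round(float(ice.mean()), 4))
    print("all Jacobian determinants positive:",
          bool((jacobian_determinant(u) > 0).all()))
```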
Abstract:
Introduction: Schizophrenia patients frequently suffer from complex motor abnormalities, including fine and gross motor disturbances, abnormal involuntary movements, neurological soft signs and parkinsonism. These symptoms occur early in the course of the disease, continue in chronic patients and may deteriorate with antipsychotic medication. Furthermore, gesture performance is impaired in patients, including the pantomime of tool use. Whether schizophrenia patients show difficulties in actual tool use has not yet been investigated. Human tool use is complex and relies on a network of distinct and distant brain areas. We therefore aimed to test whether schizophrenia patients have difficulties in tool use and to assess associations with structural brain imaging using voxel-based morphometry (VBM) and tract-based spatial statistics (TBSS). Methods: In total, 44 patients with schizophrenia (DSM-5 criteria; 59% men, mean age 38) underwent structural MR imaging and performed the Tool-Use test. The test examines the use of a scoop and a hammer in three conditions: pantomime (without the tool), demonstration (with the tool) and actual use (with a recipient object). T1-weighted images were processed using SPM8 and DTI data using FSL TBSS routines. To assess structural alterations underlying impaired tool use, we first compared gray matter (GM) volume in VBM and white matter (WM) integrity in TBSS data of patients with and without difficulties in actual tool use. Next, we explored correlations of tool use scores with VBM and TBSS data. Group comparisons were family-wise error corrected for multiple tests. Correlations were uncorrected (p < 0.001) with a minimum cluster threshold of 17 voxels (equivalent to a map-wise false positive rate of alpha < 0.0001 using a Monte Carlo procedure). Results: Tool use was impaired in schizophrenia (43.2% pantomime, 11.6% demonstration, 11.6% use). Impairment was related to reduced GM volume and WM integrity. Whole-brain analyses detected an effect in the SMA in the group analysis. Correlations of tool use scores and brain structure revealed alterations in brain areas of the dorso-dorsal pathway (superior occipital gyrus, superior parietal lobule, and dorsal premotor area) and the ventro-dorsal pathway (middle occipital gyrus, inferior parietal lobule) of the action network, as well as the insula and the left hippocampus. Furthermore, significant correlations within connecting fiber tracts - particularly alterations within the bilateral superior and anterior corona radiata as well as the corpus callosum - were associated with tool use performance. Conclusions: Tool use performance was impaired in schizophrenia, which was associated with reduced GM volume in the action network. Our results are in line with reports of impaired tool use in patients with brain lesions, particularly of the dorso-dorsal and ventro-dorsal streams of the action network. In addition, an effect of tool use on WM integrity was shown within fiber tracts connecting regions important for planning and executing tool use. Furthermore, the hippocampus is part of a brain system responsible for spatial memory and navigation. The results suggest that structural brain alterations in the common praxis network contribute to impaired tool use in schizophrenia.
Abstract:
PURPOSE Lymphangioleiomyomatosis (LAM) is characterized by proliferation of smooth muscle tissue that causes bronchial obstruction and secondary cystic destruction of lung parenchyma. The aim of this study was to evaluate the typical distribution of cystic defects in LAM with quantitative volumetric chest computed tomography (CT). MATERIALS AND METHODS CT examinations of 20 patients with confirmed LAM were evaluated with region-based quantification of lung parenchyma. Additionally, 10 consecutive patients who had recently undergone CT imaging of the lung at our institution and in whom no pulmonary pathologies were found were identified to serve as a control group. Each lung was divided into three regions (upper, middle and lower thirds) with an identical number of slices. In addition, we defined a "peel" and a "core" of the lung, comprising the 2 cm subpleural space and the remaining inner lung area, respectively. Computerized detection of lung volume and relative emphysema was performed with the PULMO 3D software (v3.42, Fraunhofer MEVIS, Bremen, Germany). This software package enables the quantification of emphysematous lung parenchyma by calculating the pixel index, which is defined as the ratio of lung voxels with a density below -950 HU to the total number of voxels in the lung. RESULTS Cystic changes accounted for 0.1-39.1% of the total lung volume in patients with LAM. Disease manifestation in the central lung was significantly higher than in peripheral areas (peel median: 15.1%, core median: 20.5%; p=0.001). The lower thirds of the lung parenchyma showed significantly fewer cystic changes than the upper and middle lung areas combined (lower third: median 13.4%, upper and middle thirds: median 19.0%, p=0.001). CONCLUSION The distribution of cystic lesions in LAM is significantly more pronounced in the central lung compared to peripheral areas. There is a significant predominance of cystic changes in apical and intermediate lung zones compared to the lung bases.
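The pixel index described above is a simple voxel count; the sketch below (a hypothetical illustration, not the PULMO 3D software) computes it for a toy CT volume and splits the lung mask into a 2 cm subpleural peel and an inner core using a Euclidean distance transform.

```python
# Illustrative pixel-index calculation and peel/core split for a toy CT volume.
import numpy as np
from scipy.ndimage import distance_transform_edt

def pixel_index(hu, mask, threshold=-950.0):
    """Fraction of voxels inside `mask` with attenuation below `threshold` (HU)."""
    lung = hu[mask]
    if lung.size == 0:
        return float("nan")
    return float((lung < threshold).sum()) / lung.size

def peel_core_masks(mask, spacing_mm, peel_mm=20.0):
    """Split a lung mask into a subpleural peel (<= peel_mm from the surface) and a core."""
    # Distance (in mm) from each lung voxel to the nearest non-lung voxel.
    dist = distance_transform_edt(mask, sampling=spacing_mm)
    peel = mask & (dist <= peel_mm)
    core = mask & (dist > peel_mm)
    return peel, core

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hu = rng.normal(-800, 120, size=(40, 64, 64))   # toy CT attenuation values (HU)
    mask = np.zeros_like(hu, dtype=bool)
    mask[5:35, 10:54, 10:54] = True                 # toy lung mask
    peel, core = peel_core_masks(mask, spacing_mm=(2.5, 1.0, 1.0))
    print("whole lung:", round(pixel_index(hu, mask), 3))
    print("peel:", round(pixel_index(hu, peel), 3),
          "core:", round(pixel_index(hu, core), 3))
```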
Abstract:
Arterial spin labeling (ASL) is a technique for noninvasively measuring cerebral perfusion using magnetic resonance imaging. Clinical applications of ASL include functional activation studies, evaluation of the effect of pharmaceuticals on perfusion, and assessment of cerebrovascular disease, stroke, and brain tumors. The use of ASL in the clinic has been limited by poor image quality when large anatomic coverage is required and by the time required for data acquisition and processing. This research sought to address these difficulties by optimizing the ASL acquisition and processing schemes. To improve data acquisition, optimal acquisition parameters were determined through simulations, phantom studies and in vivo measurements. The scan time for ASL data acquisition was limited to fifteen minutes to reduce potential subject motion. A processing scheme was implemented that rapidly produced regional cerebral blood flow (rCBF) maps with minimal user input. To provide a measure of the precision of the rCBF values produced by ASL, bootstrap analysis was performed on a representative data set. The bootstrap analysis of single gray and white matter voxels yielded coefficients of variation of 6.7% and 29%, respectively, implying that the calculated rCBF value is far more precise for gray matter than for white matter. Additionally, bootstrap analysis was performed to investigate the sensitivity of the rCBF data to the input parameters and to provide a quantitative comparison of several existing perfusion models. This study guided the selection of the optimum perfusion quantification model for further experiments. The optimized ASL acquisition and processing schemes were evaluated with two ASL acquisitions on each of five normal subjects. The gray-to-white matter rCBF ratios for nine of the ten acquisitions were within ±10% of 2.6, and none were statistically different from 2.6, the typical ratio produced by a variety of quantitative perfusion techniques. Overall, this work produced an ASL data acquisition and processing technique for quantitative perfusion and functional activation studies, while revealing the limitations of the technique through bootstrap analysis.
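A bootstrap estimate of single-voxel precision of the kind described above can be sketched as follows; this is an illustrative toy, not the thesis code, and the signal-to-rCBF scaling is a placeholder.

```python
# Bootstrap estimate of the coefficient of variation of a single-voxel rCBF value.
import numpy as np

def bootstrap_cv(delta_m, n_boot=2000, scale=1.0, rng=None):
    """delta_m: repeated ASL control-label difference signals for one voxel (1D array)."""
    rng = np.random.default_rng(rng)
    n = delta_m.size
    # Each replicate: resample the repeats with replacement, average, scale to rCBF.
    reps = np.array([scale * delta_m[rng.integers(0, n, n)].mean()
                     for _ in range(n_boot)])
    return reps.std(ddof=1) / reps.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gray = rng.normal(1.0, 0.25, size=40)    # toy gray-matter difference signals
    white = rng.normal(0.35, 0.25, size=40)  # toy white-matter difference signals
    print("gray-matter CV :", round(bootstrap_cv(gray, rng=2), 3))
    print("white-matter CV:", round(bootstrap_cv(white, rng=3), 3))
```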
Abstract:
Radiation therapy for patients with intact cervical cancer is frequently delivered using primary external beam radiation therapy (EBRT) followed by two fractions of intracavitary brachytherapy (ICBT). Although the tumor is the primary radiation target, controlling microscopic disease in the lymph nodes is just as critical to patient treatment outcome. In patients in whom gross lymphadenopathy is discovered, an extra EBRT boost course is delivered between the two ICBT fractions. Since the nodal boost is an addendum to primary EBRT and ICBT, the prescription and delivery must be performed considering previously delivered dose. This project aims to address the major issues of this complex process for the purpose of improving treatment accuracy while increasing dose sparing of the surrounding normal tissues. Because external beam boosts to involved lymph nodes are given prior to the completion of ICBT, assumptions must be made about the dose to positive lymph nodes from future implants. The first aim of this project was to quantify differences in nodal dose contribution between independent ICBT fractions. We retrospectively evaluated differences in the ICBT dose contribution to positive pelvic nodes for ten patients who had previously received an external beam nodal boost. Our results indicate that the mean dose to the pelvic nodes differed by up to 1.9 Gy between independent ICBT fractions. The second aim was to develop and validate a volumetric method for summing dose to the normal tissues during prescription of the nodal boost. The traditional method of dose summation uses the maximum point dose from each modality, which often only represents the worst-case scenario. However, the worst case is often an exaggeration when highly conformal therapy methods such as intensity modulated radiation therapy (IMRT) are used. We used deformable image registration algorithms to volumetrically sum dose for the bladder and rectum and created a voxel-by-voxel validation method. The mean errors in the deformable image registration results over all voxels within the bladder and rectum were 5 and 6 mm, respectively. Finally, the third aim explored the potential use of proton therapy to reduce normal tissue dose. A major physical advantage of protons over photons is that protons stop after delivering dose in the tumor. Although theoretically superior to photons, proton beams are more sensitive to uncertainties caused by interfractional anatomical variations, which must be accounted for during treatment planning to ensure complete target coverage. We have demonstrated a systematic approach to determine population-based anatomical margin requirements for proton therapy. The observed optimal treatment angles for the common iliac nodes were 90° (left lateral) and 180° (posterior-anterior [PA]) with additional 0.8 cm and 0.9 cm margins, respectively. For the external iliac nodes, lateral and PA beams required additional 0.4 cm and 0.9 cm margins, respectively. Through this project, we have provided radiation oncologists with additional information about potential differences in nodal dose between independent ICBT insertions and about the volumetric total dose distribution in the bladder and rectum. We have also determined the margins needed for safe delivery of proton therapy when delivering nodal boosts to patients with cervical cancer.
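The contrast between maximum point-dose summation and voxel-by-voxel summation after deformable registration can be illustrated with a short sketch; the displacement field here is synthetic, whereas in the project it would come from the registration algorithm.

```python
# Toy comparison: max point-dose summation vs. voxel-by-voxel dose summation
# after warping one fraction's dose grid with a (given) displacement field.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_dose(dose, disp):
    """Resample `dose` (Z, Y, X) at x + disp(x); disp has shape (3, Z, Y, X) in voxels."""
    grid = np.indices(dose.shape).astype(float)
    return map_coordinates(dose, grid + disp, order=1, mode="nearest")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dose_a = rng.gamma(2.0, 1.5, size=(20, 32, 32))      # toy dose, fraction A (Gy)
    dose_b = rng.gamma(2.0, 1.5, size=(20, 32, 32))      # toy dose, fraction B (Gy)
    disp = 0.5 * np.sin(np.indices(dose_a.shape) / 5.0)  # toy smooth displacement field
    total = dose_a + warp_dose(dose_b, disp)             # voxel-by-voxel sum
    print("sum of maximum point doses:", round(dose_a.max() + dose_b.max(), 2))
    print("maximum of summed dose    :", round(float(total.max()), 2))  # typically lower
```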
Abstract:
The first data set contains the mean and coefficient of variation (standard deviation divided by the mean) of a multi-frequency indicator I derived from ER60 acoustic information collected at five frequencies (18, 38, 70, 120, and 200 kHz) in the Bay of Biscay in May of the years 2006, 2008, 2009 and 2010 (Pelgas surveys). The multi-frequency indicator was first calculated per voxel (20 m long × 5 m deep sampling unit) and then averaged on a spatial grid (approx. 20 nm × 20 nm) for five 5-m depth layers in the surface waters (10-15 m, 15-20 m, 20-25 m, 25-30 m below the sea surface); there are missing values, in particular in the shallowest layer. The second data set provides, for each grid cell and depth layer, the proportion of voxels for which the multi-frequency indicator I was indicative of a certain group of organisms. For this, the following interpretation was used: I < 0.39, swim bladder fish or large gas bubbles; I = 0.39-0.58, small resonant bubbles present in gas-bearing organisms such as larval fish and phytoplankton; I = 0.7-0.8, fluid-like zooplankton such as copepods and euphausiids; and I > 0.8, mackerel. These proportions can be interpreted as a relative abundance index for each of the four organism groups.
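The interpretation rule quoted above maps indicator values to organism groups; a small sketch of that classification and of the per-cell proportions is given below (thresholds follow the text, and values in the uncovered 0.58-0.7 gap are left unclassified).

```python
# Classify per-voxel multi-frequency indicator values and compute per-cell proportions.
import numpy as np

def classify_indicator(i_values):
    """Return a group label for each element of a 1D array of indicator values I."""
    labels = np.full(i_values.shape, "unclassified", dtype=object)
    labels[i_values < 0.39] = "swim-bladder fish / large gas bubbles"
    labels[(i_values >= 0.39) & (i_values <= 0.58)] = "small resonant bubbles"
    labels[(i_values >= 0.70) & (i_values <= 0.80)] = "fluid-like zooplankton"
    labels[i_values > 0.80] = "mackerel"
    return labels

def group_proportions(i_values):
    labels = classify_indicator(i_values)
    groups, counts = np.unique(labels, return_counts=True)
    return dict(zip(groups, counts / labels.size))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cell_voxels = rng.uniform(0.0, 1.2, size=500)  # toy indicator values for one grid cell
    for group, prop in group_proportions(cell_voxels).items():
        print(f"{group:40s} {prop:.2f}")
```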
Abstract:
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
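The role of the precalculated sparse system matrix in the iteration can be seen in a schematic OSEM update like the one below; this is a generic sketch of the algorithm on a random sparse matrix, not the authors' scanner-specific implementation.

```python
# Schematic ordered-subsets EM (OSEM) reconstruction with a sparse system matrix.
import numpy as np
import scipy.sparse as sp

def osem(A, y, n_subsets=4, n_iter=5, eps=1e-12):
    """A: sparse system matrix (n_lors x n_voxels); y: measured counts per LOR."""
    n_lors, n_voxels = A.shape
    x = np.ones(n_voxels)
    subsets = [np.arange(s, n_lors, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            A_s = A[rows]                                        # subset of system-matrix rows
            sens = np.asarray(A_s.sum(axis=0)).ravel() + eps     # subset sensitivity image
            ratio = y[rows] / (A_s @ x + eps)                    # measured / expected counts
            x *= (A_s.T @ ratio) / sens                          # multiplicative OSEM update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = sp.random(800, 100, density=0.05, random_state=0, format="csr")
    x_true = rng.uniform(0.5, 2.0, size=100)
    y = rng.poisson(A @ x_true).astype(float)
    x_hat = osem(A, y)
    print("relative error:",
          round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 3))
```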
Abstract:
We used event-related functional MRI to investigate the neural bases of two categories of mental processes believed to contribute to performance of an alphabetization working memory task: memory storage and memory manipulation. Our delayed-response tasks required memory for the identity and position-in-the-display of items in two- or five-letter memory sets (to identify load-sensitive regions) or memory for the identity and relative position-in-the-alphabet of items in five-letter memory sets (to identify manipulation-sensitive regions). Results revealed voxels in the left perisylvian cortex of five of five subjects showing load sensitivity (as contrasted with alphabetization-sensitive voxels in this region in only one subject) and voxels of dorsolateral prefrontal cortex in all subjects showing alphabetization sensitivity (as contrasted with load-sensitive voxels in this region in two subjects). This double dissociation was reliable at the group level. These data are consistent with the hypothesis that the nonmnemonic executive control processes that can contribute to working memory function are primarily prefrontal cortex-mediated whereas mnemonic processes necessary for working memory storage are primarily posteriorly mediated. More broadly, they support the view that working memory is a faculty that arises from the coordinated interaction of computationally and neuroanatomically dissociable processes.
Abstract:
Doctoral thesis, Biomedical Engineering and Biophysics, Universidade de Lisboa, Faculdade de Ciências, 2016
Abstract:
Integrated master's thesis in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2016
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition, such as image segmentation. However, the EM algorithm requires considerable computational time when applied to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be sped up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain.
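For reference, the plain EM iteration for intensity-based mixture segmentation, whose E-step visits every voxel and is exactly the cost that the sparse/incremental and kd-tree variants reduce, can be sketched as follows (a generic 1D-intensity example, not the paper's algorithm).

```python
# Plain EM for a 1D Gaussian mixture over voxel intensities (the baseline being accelerated).
import numpy as np

def em_gmm(x, k=3, n_iter=50):
    """x: 1D array of voxel intensities; returns means, variances, weights, responsibilities."""
    rng = np.random.default_rng(0)
    mu = rng.choice(x, k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each voxel.
        log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(2 * np.pi * var) + np.log(w))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update the mixture parameters from the responsibilities.
        n_k = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k + 1e-6
        w = n_k / x.size
    return mu, var, w, r

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy "MR intensities" drawn from three tissue classes.
    x = np.concatenate([rng.normal(30, 5, 2000),
                        rng.normal(70, 8, 3000),
                        rng.normal(120, 10, 2500)])
    mu, var, w, _ = em_gmm(x)
    print("estimated class means:", np.round(np.sort(mu), 1))
```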
Abstract:
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and often constraints are built into the most fundamental methods (e.g. Area Based Matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution that is deemed most likely. Using this formulation, this paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, their orientation, or even how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large for even very small voxel spaces (a 5 × 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool to numerically evaluate the usefulness of any constraints that are added.
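The flavor of the solution-space counting can be conveyed with a much-simplified toy: enumerate every occupancy assignment of a tiny voxel grid that is consistent with two binary silhouette views. This brute-force sketch ignores color and the first-order logic encoding used in the paper.

```python
# Brute-force count of voxel-occupancy solutions consistent with two silhouette views.
import itertools
import numpy as np

def consistent(grid, top_view, side_view):
    """A grid is consistent if each viewing ray is occupied exactly when its pixel is."""
    return (np.array_equal(grid.any(axis=0).astype(int), top_view) and
            np.array_equal(grid.any(axis=1).astype(int), side_view))

if __name__ == "__main__":
    n = 3
    top_view = np.array([1, 1, 0])    # silhouette seen along axis 0
    side_view = np.array([1, 0, 1])   # silhouette seen along axis 1
    solutions = 0
    for bits in itertools.product([0, 1], repeat=n * n):
        grid = np.array(bits).reshape(n, n)
        if consistent(grid, top_view, side_view):
            solutions += 1
    print(f"{solutions} consistent reconstructions out of {2 ** (n * n)} candidates")
```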
Abstract:
Background: Proton Magnetic Resonance Spectroscopy (1H-MRS) is a non-invasive imaging technique that enables quantification of neurochemistry in vivo and thereby facilitates investigation of the biochemical underpinnings of human cognitive variability. Studies in the field of cognitive spectroscopy have commonly focused on relationships between measures of N-acetyl aspartate (NAA), a surrogate marker of neuronal health and function, and broad measures of cognitive performance, such as IQ. Methodology/Principal Findings: In this study, we used 1H-MRS to interrogate single voxels in occipitoparietal and frontal cortex, in parallel with assessments of psychometric intelligence, in a sample of 40 healthy adult participants. We found correlations between NAA and IQ that were within the range reported in previous studies. However, the magnitude of these effects was significantly modulated by the stringency of data screening and the extent to which outlying values contributed to statistical analyses. Conclusions/Significance: 1H-MRS offers a sensitive tool for assessing neurochemistry non-invasively, yet the relationships between brain metabolites and broad aspects of human behavior such as IQ are subtle. We highlight the need to develop an increasingly rigorous analytical and interpretive framework for collecting and reporting data obtained from cognitive spectroscopy studies of this kind.
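The sensitivity to outlier screening noted above can be demonstrated on synthetic data: a single implausible metabolite value can noticeably shift a correlation, so the exclusion rule matters. The sketch below uses an arbitrary |z| < 2.5 criterion purely as an example.

```python
# Synthetic demonstration: effect of one outlier and of z-score screening on a correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
naa = rng.normal(8.0, 0.5, 40)                                 # toy NAA concentrations
iq = 100 + 6.0 * (naa - naa.mean()) + rng.normal(0, 7, 40)     # weakly related toy IQ

naa_out = naa.copy()
naa_out[0] = 12.0                                              # inject one implausible value

def screened_r(x, y, z_max=2.5):
    """Pearson r after excluding points whose |z-score| exceeds z_max."""
    keep = np.abs(stats.zscore(x)) < z_max
    return stats.pearsonr(x[keep], y[keep])[0]

print("r, clean data          :", round(stats.pearsonr(naa, iq)[0], 2))
print("r, with one outlier    :", round(stats.pearsonr(naa_out, iq)[0], 2))
print("r, outlier screened out:", round(screened_r(naa_out, iq), 2))
```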
Abstract:
This paper presents a novel algorithm for medial surface extraction that is based on the density-corrected Hamiltonian analysis of Torsello and Hancock [1]. In order to cope with the exponential growth of the number of voxels, we compute a first coarse discretization of the mesh, which is iteratively refined until a desired resolution is achieved. The refinement criterion relies on the analysis of the momentum field, where only the voxels with a suitable value of the divergence are exploded to a lower level of the hierarchy. In order to compensate for the discretization errors incurred at the coarser levels, a dilation procedure is added at the end of each iteration. Finally, we design a simple alignment procedure to correct the displacement of the extracted skeleton with respect to the true underlying medial surface. We evaluate the proposed approach with an extensive series of qualitative and quantitative experiments.
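A much-simplified stand-in for the refinement criterion (using the divergence of the gradient of a distance transform rather than the density-corrected momentum field of [1]) can be sketched as follows; voxels with strongly negative divergence are the ones that would be refined further.

```python
# Simplified divergence-based criterion for flagging voxels near a medial surface.
import numpy as np
from scipy.ndimage import distance_transform_edt

def medial_candidates(mask, div_threshold=-0.3):
    """Flag voxels of a binary mask where the divergence of the distance gradient is strongly negative."""
    dist = distance_transform_edt(mask)
    gz, gy, gx = np.gradient(dist)
    # Divergence of the gradient field; strongly negative values concentrate near
    # the medial surface (the full method uses a density-corrected momentum field).
    div = np.gradient(gz, axis=0) + np.gradient(gy, axis=1) + np.gradient(gx, axis=2)
    return mask & (div < div_threshold)

if __name__ == "__main__":
    mask = np.zeros((30, 40, 40), dtype=bool)
    mask[5:25, 8:32, 8:32] = True                 # toy solid box
    cand = medial_candidates(mask)
    print("voxels flagged for refinement:", int(cand.sum()), "of", int(mask.sum()))
```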