964 results for Volumetric MRI
Abstract:
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodologies were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of ageing). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.
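The two headline metrics of the challenge, multi-class accuracy and AUC, are standard and easy to reproduce. Below is a small self-contained sketch (not the challenge's official evaluation code), computing AUC in its pairwise Mann-Whitney form for one diagnostic class versus the rest:

```python
# Self-contained sketch (not the challenge's official evaluation code) of
# the two headline metrics: multi-class accuracy and, for AUC, the
# pairwise Mann-Whitney form for one diagnostic class versus the rest.

def accuracy(y_true, y_pred):
    """Fraction of subjects whose predicted diagnosis matches the label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def auc_one_vs_rest(y_true, scores, positive):
    """P(score of a random 'positive' subject exceeds that of a random
    other subject), counting ties as 1/2."""
    pos = [s for t, s in zip(y_true, scores) if t == positive]
    neg = [s for t, s in zip(y_true, scores) if t != positive]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A per-class AUC like this can be averaged over the three diagnostic groups to summarize a multi-class classifier with a single number.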
Abstract:
Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools and their adoption is growing in popularity. Statistical methods, machine learning and data mining algorithms have been successfully adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically also requires pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main obstacle to the integrated pre-processing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of pre-processing and mining tools, which leads to an error-prone and inefficient process. In this work we present K-Surfer, a novel plug-in for the Konstanz Information Miner (KNIME) workbench that automates the pre-processing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionality available in the KNIME workbench.
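The import step that K-Surfer automates amounts to parsing FreeSurfer's tabular stats output (e.g. aseg.stats). A minimal sketch, assuming the common layout of '#'-prefixed comment lines, one of which ("# ColHeaders ...") names the whitespace-separated columns; this is illustrative, not K-Surfer's actual code:

```python
# Sketch of the import step K-Surfer automates: parsing a FreeSurfer
# tabular stats file (e.g. aseg.stats). Assumes the common layout of
# '#'-prefixed comment lines, one of which ("# ColHeaders ...") names
# the whitespace-separated columns. Illustrative only, not K-Surfer code.

def parse_stats(text):
    """Return a list of {column_name: value} dicts from a stats table."""
    headers, rows = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            # Column names are announced on the "# ColHeaders ..." line.
            if line.startswith("# ColHeaders"):
                headers = line.split()[2:]
        elif headers:
            rows.append(dict(zip(headers, line.split())))
    return rows
```

Rows parsed this way map directly onto the tabular data model of a KNIME node, which is what makes the downstream mining steps composable.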
Abstract:
Background The quality of the early environment is hypothesized to be an influence on morphological development in key neural areas related to affective responding, but direct evidence to support this possibility is limited. In a 22-year longitudinal study, we examined hippocampal and amygdala volumes in adulthood in relation to early infant attachment status, an important indicator of the quality of the early caregiving environment. Methods Participants (N = 59) were derived from a prospective longitudinal study of the impact of maternal postnatal depression on child development. Infant attachment status (24 Secure; 35 Insecure) was observed at 18 months of age, and MRI assessments were completed at 22 years. Results In line with hypotheses, insecure versus secure infant attachment status was associated with larger amygdala volumes in young adults, an effect that was not accounted for by maternal depression history. We did not find early infant attachment status to predict hippocampal volumes. Conclusions Common variations in the quality of early environment are associated with gross alterations in amygdala morphology in the adult brain. Further research is required to establish the neural changes that underpin the volumetric differences reported here, and any functional implications.
An LDA and probability-based classifier for the diagnosis of Alzheimer's Disease from structural MRI
Abstract:
In this paper a custom classification algorithm based on linear discriminant analysis and probability-based weights is implemented and applied to hippocampus measurements from structural magnetic resonance images of healthy subjects and Alzheimer's Disease sufferers, with the aim of diagnosing them as accurately as possible. The classifier works by labelling each measurement of a hippocampal volume as healthy-control-sized or Alzheimer's-Disease-sized; these new features are then weighted and used to classify the subject as a healthy control or as suffering from Alzheimer's Disease. The preliminary results reach an accuracy of 85.8%, similar to state-of-the-art methods such as a Naive Bayes classifier and a Support Vector Machine. An advantage of the method proposed in this paper over the aforementioned state-of-the-art classifiers is the descriptive ability of the classifications it produces. The descriptive model can be of great help to a doctor in the diagnosis of Alzheimer's Disease, or even in furthering the understanding of how Alzheimer's Disease affects the hippocampus.
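The two-stage scheme can be sketched as follows: each hippocampal measurement gets a one-dimensional decision boundary (the midpoint of the two class means, i.e. equal-variance LDA), and each resulting binary feature votes with a weight equal to its training accuracy. This is an illustrative reconstruction, not the authors' implementation:

```python
# Illustrative reconstruction (not the authors' code) of the two-stage
# classifier: each hippocampal measurement gets a 1-D LDA-style boundary
# (midpoint of the class means, equal variances assumed), and each
# binarised measurement votes with a weight equal to its training accuracy.

def fit(train_X, train_y):
    """train_X: feature vectors of hippocampal measurements;
    train_y: 1 = Alzheimer's Disease, 0 = healthy control."""
    thresholds, weights = [], []
    for j in range(len(train_X[0])):
        ad = [x[j] for x, y in zip(train_X, train_y) if y == 1]
        hc = [x[j] for x, y in zip(train_X, train_y) if y == 0]
        t = (sum(ad) / len(ad) + sum(hc) / len(hc)) / 2.0
        # Atrophied (smaller) measurements vote AD; weight = train accuracy.
        correct = sum((x[j] < t) == (y == 1)
                      for x, y in zip(train_X, train_y))
        thresholds.append(t)
        weights.append(correct / len(train_y))
    return thresholds, weights

def predict(x, thresholds, weights):
    """Weighted vote over the binarised measurements: 1 = AD, 0 = HC."""
    ad = sum(w for v, t, w in zip(x, thresholds, weights) if v < t)
    hc = sum(w for v, t, w in zip(x, thresholds, weights) if v >= t)
    return 1 if ad > hc else 0
</```

The per-feature weights are what make such a model descriptive: they show which measurements carried the decision, unlike an SVM's opaque decision function.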
Abstract:
This work investigates the problem of feature selection over neuroimaging features from structural MRI brain images for the classification of subjects as healthy controls or as suffering from Mild Cognitive Impairment or Alzheimer's Disease. A Genetic Algorithm wrapper method for feature selection is adopted in conjunction with a Support Vector Machine classifier. In very large feature sets, feature selection is found to be redundant, as accuracy is often worsened compared to a Support Vector Machine with no feature selection. However, when only the hippocampal subfields are used, feature selection yields a significant improvement in classification accuracy. Three-class Support Vector Machines and two-class Support Vector Machines combined with weighted voting are also compared, with the former found more useful. The highest accuracy achieved on the test data was 65.5%, using a genetic algorithm for feature selection with a three-class Support Vector Machine classifier.
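A wrapper method of this kind evaluates candidate feature subsets by the accuracy of a classifier trained on them. The sketch below is illustrative only: it substitutes a nearest-centroid classifier for the SVM so the example stays self-contained, and the population size, operators and rates are arbitrary choices, not the paper's settings:

```python
# Sketch of a genetic-algorithm wrapper for feature selection.
# Illustrative only: the SVM of the abstract is replaced by a simple
# nearest-centroid classifier so the example is self-contained, and the
# population size, operators and rates are arbitrary choices.
import random

def accuracy(mask, train, val):
    """Nearest-centroid accuracy using only features where mask[j] == 1.
    train/val are lists of (feature_vector, label) pairs."""
    feats = [j for j, keep in enumerate(mask) if keep]
    if not feats:
        return 0.0
    cents = {}
    for label in set(y for _, y in train):
        rows = [x for x, y in train if y == label]
        cents[label] = [sum(r[j] for r in rows) / len(rows) for j in feats]
    hits = sum(min(cents, key=lambda c: sum((x[j] - cj) ** 2
               for j, cj in zip(feats, cents[c]))) == y for x, y in val)
    return hits / len(val)

def ga_select(train, val, n_feat, pop=20, gens=30, seed=0):
    """Evolve bitmask chromosomes; fitness = wrapped classifier accuracy."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda m: accuracy(m, train, val), reverse=True)
        popn = popn[:pop // 2]                    # truncation selection
        while len(popn) < pop:
            a, b = rng.sample(popn[:pop // 2], 2)
            cut = rng.randrange(1, n_feat)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # occasional bit flip
                k = rng.randrange(n_feat)
                child[k] = 1 - child[k]
            popn.append(child)
    return max(popn, key=lambda m: accuracy(m, train, val))
```

Because every fitness evaluation retrains the classifier, wrappers like this become expensive on very large feature sets, which is consistent with the abstract's observation that they pay off mainly on the small hippocampal-subfield set.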
Abstract:
Non-linear methods for estimating variability in time series are currently in widespread use. Among such methods are approximate entropy (ApEn) and sample entropy (SampEn). The applicability of ApEn and SampEn in analyzing data is evident and their use is increasing. However, consistency is a point of concern in these tools: the classification of the temporal organization of a data set might indicate a relatively less ordered series in relation to another when the opposite is true. As highlighted by their proponents themselves, ApEn and SampEn might present incorrect results due to this lack of consistency. In this study, we present a method which gains consistency by applying ApEn repeatedly over a wide range of combinations of window lengths and matching error tolerances. The tool is called volumetric approximate entropy, vApEn. We analyze nine artificially generated prototypical time series with different degrees of temporal order (combinations of sine waves, logistic maps with different control parameter values, random noises). While ApEn/SampEn clearly fail to consistently identify the temporal order of the sequences, vApEn correctly does. In order to validate the tool we performed shuffled and surrogate data analysis. Statistical analysis confirmed the consistency of the method. (C) 2008 Elsevier Ltd. All rights reserved.
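The core idea, summing ApEn over a grid of window lengths m and tolerances r instead of committing to a single (m, r) pair, can be sketched as follows; the grid values below are a toy choice, not the range used in the paper:

```python
# Illustrative sketch of volumetric approximate entropy (vApEn): sum
# plain ApEn over a grid of window lengths and tolerances. The grid
# below is a toy choice, not the range used in the paper.
import math

def apen(u, m, r):
    """Approximate entropy of sequence u (window m, tolerance r)."""
    def phi(k):
        n = len(u) - k + 1
        windows = [u[i:i + k] for i in range(n)]
        total = 0.0
        for wi in windows:
            # Fraction of windows within Chebyshev distance r (self included).
            c = sum(max(abs(a - b) for a, b in zip(wi, wj)) <= r
                    for wj in windows) / n
            total += math.log(c)
        return total / n
    return phi(m) - phi(m + 1)

def vapen(u, ms=(1, 2), rs=(0.1, 0.15, 0.2, 0.25)):
    """Sum ApEn over (m, r) combinations, r as a fraction of the SD."""
    mean = sum(u) / len(u)
    sd = (sum((v - mean) ** 2 for v in u) / len(u)) ** 0.5
    return sum(apen(u, m, f * sd) for m in ms for f in rs)
```

Aggregating over the (m, r) grid is what restores consistency: a series that looks more ordered than another at one parameter choice but not at another is ranked by the whole volume instead of a single slice.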
Abstract:
The quality control optimization of medical processes that use ionizing radiation in the treatment of diseases like cancer is a key element for patient safety and treatment success. The major medical application of radiation is radiotherapy, i.e. the delivery of dose levels to well-defined target tissues of a patient with the purpose of eliminating a disease. The need for accurate tumour-edge definition, with the purpose of preserving healthy surrounding tissue, demands rigorous radiation treatment planning. Dosimetric methods are used for dose distribution mapping in the region of interest to ensure that the prescribed dose and the irradiated region are correct. The Fricke xylenol gel (FXG) is the main dosimeter that supplies visualization of the three-dimensional (3D) dose distribution. In this work the dosimetric characteristics of the modified Fricke dosimeter produced at the Radiation Metrology Centre of the Nuclear and Energy Research Institute (IPEN), such as the dependence of dose response on gel concentration, the influence of xylenol orange addition, dose response between 5 and 50 Gy and signal stability, were evaluated by magnetic resonance imaging (MRI). Using the same gel solution, breast simulators (phantoms) were shaped and absorbed dose distributions were imaged by MRI at the Nuclear Resonance Laboratory of the Physics Institute of Sao Paulo University. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Texture is an important visual attribute used to describe the pixel organization in an image. Although it is easily identified by humans, its analysis demands a high level of sophistication and computational complexity. This paper presents a novel approach for texture analysis based on analyzing the complexity of the surface generated from a texture, in order to describe and characterize it. The proposed method produces a texture signature which is able to efficiently characterize different texture classes. The paper also illustrates the performance of the novel method in an experiment using texture images of leaves. Leaf identification is a difficult and complex task due to the nature of plants, which present huge pattern variation. The high classification rate achieved shows the potential of the method, improving on traditional texture techniques such as Gabor filters and Fourier analysis.
Abstract:
The visualization of three-dimensional (3D) images is increasingly being used in the area of medicine, helping physicians diagnose disease. The advances achieved in the scanners used for the acquisition of these 3D exams, such as computerized tomography (CT) and magnetic resonance imaging (MRI), enable the generation of images with higher resolutions, thus generating much larger files. Currently, the rendering of these images is a computationally expensive task, demanding a high-end computer. Direct remote access to these images through the internet is also inefficient, since all images have to be transferred to the user's equipment before the 3D visualization process can start. With these problems in mind, this work proposes and analyses a solution for the remote rendering of 3D medical images, called Remote Rendering (RR3D). In RR3D, the whole rendering process is performed on a server or a cluster of servers with high computational power, and only the resulting image is transferred to the client, while still allowing the client to perform operations such as rotations, zoom, etc. The solution was developed using web services written in Java and an architecture that uses the scientific visualization package ParaView, the ParaViewWeb framework and the PACS server DCM4CHEE. The solution was tested in two scenarios, where the rendering process was performed by a server with graphics hardware (GPUs) and by a server without GPUs. In the scenario without GPUs, the solution was executed in parallel with varying numbers of cores (processing units) dedicated to it. In order to compare our solution to other medical visualization applications, a third scenario was used in which the rendering process was done locally. In all three scenarios, the solution was tested at different network speeds. The solution satisfactorily solved the problem of the delay in the transfer of DICOM files, while allowing the use of low-end computers, and even tablets and smartphones, as clients for visualizing the exams.
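The essential design choice of RR3D is that viewing parameters, not the DICOM volume, cross the network: the client posts camera settings and receives a rendered 2-D image. The toy stand-in below only shows that request/response shape; the renderer is a stub, the JSON layout is hypothetical, and no ParaView or DCM4CHEE API is involved:

```python
# Toy illustration of the RR3D exchange: the client sends viewing
# parameters as JSON and receives rendered image bytes; the DICOM volume
# never leaves the server. The renderer is a stub (the real system uses
# ParaView/ParaViewWeb behind Java web services) and the request shape
# is hypothetical.
import json

def render(volume_id, params):
    """Stub server-side renderer: returns fake image bytes derived from
    the camera parameters."""
    tag = f"IMG:{volume_id}:rot={params['rot']}:zoom={params['zoom']}"
    return tag.encode("ascii")

def handle_request(body):
    """Server entry point: parse the client's JSON request, render, and
    return only the resulting image bytes."""
    req = json.loads(body)
    return render(req["volume"], req["params"])
```

Since only a small rendered image travels per interaction, the client's hardware and bandwidth requirements stay low regardless of the exam's size, which is what enables tablets and smartphones as clients.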
Abstract:
Objective. To evaluate the inorganic particle content and the flexural strength of new condensable composites for posterior teeth in comparison to conventional hybrid composites. Method. The inorganic particle content was determined by weighing the mass of a polymerized composite before and after the elimination of the organic phase. The volumetric particle content was determined by a practical method based on Archimedes' principle, which calculates the volume of the composite and of its particles from the differential mass measured in air and in water. Three-point flexural strength was evaluated according to the ISO 4049:1988 standard. Results. The results showed the following filler content: Alert, 67.26%; Z-100, 65.27%; Filtek P 60, 62.34%; Ariston pHc, 64.07%; Tetric Ceram, 57.22%; Definite, 54.42%; Solitaire, 47.76%. In the flexural strength test, the materials presented the following decreasing order of resistance: Filtek P 60 (170.02 MPa) > Z-100 (151.34 MPa) > Tetric Ceram (126.14 MPa) = Alert (124.89 MPa) > Ariston pHc (102.00 MPa) = Definite (93.63 MPa) > Solitaire (56.71 MPa). Conclusion. New condensable composites for posterior teeth present a concentration of inorganic particles similar to that of hybrid composites but do not necessarily present higher flexural strength. (C) 2003 Elsevier B.V. Ltd. All rights reserved.
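The Archimedes step rests on a one-line formula: a body's volume equals its apparent mass loss in water divided by the density of water, so the volumetric filler content follows from four weighings. A minimal sketch, with made-up sample masses:

```python
# The Archimedes step in one line: volume = (mass in air - apparent mass
# in water) / density of water. The volumetric filler content is then the
# ratio of particle volume to composite volume. Sample masses are made up.

RHO_WATER = 1.0  # g/cm^3, approximately, at room temperature

def volume_cm3(mass_air_g, mass_water_g, rho_water=RHO_WATER):
    """Volume of a body from its buoyant mass loss when weighed in water."""
    return (mass_air_g - mass_water_g) / rho_water

def filler_volume_fraction(comp_air, comp_water, part_air, part_water):
    """Volumetric filler content = particle volume / composite volume."""
    return volume_cm3(part_air, part_water) / volume_cm3(comp_air, comp_water)
```

For example, a composite weighing 2.0 g in air and 1.0 g immersed occupies 1.0 cm³; if its recovered particles weigh 1.5 g in air and 0.9 g immersed, the volumetric filler content is 0.6.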
Abstract:
Objective: The aim of the present study was to describe the clinical and MRI findings of the temporomandibular joint (TMJ) in patients with major depressive disorders (MDDs) of the non-psychotic type. Methods: 40 patients (80 TMJs) who were diagnosed as having MDDs were selected for this study. The clinical examination of the TMJs was conducted according to the research diagnostic criteria for temporomandibular disorders (TMDs). The MRIs were obtained bilaterally in each patient with axial, parasagittal and paracoronal sections within a real-time dynamic sequence. Two trained oral radiologists assessed all images. For statistical analyses, Fisher's exact test and the chi-squared test were applied (alpha = 0.05). Results: Migraine was reported in 52.5% of subjects. Considering disc position, statistically significant differences between opening patterns with and without alteration (p = 0.00) and between present and absent joint noises (p = 0.00) were found. Regarding muscular pain, differences between patients with and without abnormalities in disc function and between patients with and without abnormalities in disc position were not statistically significant (p = 0.42 and p = 0.40, respectively). Significant differences between mandibular pathways with and without abnormalities (p = 0.00) and between present and absent joint noises (p = 0.00) were observed. Conclusion: Based on the preliminary results observed by clinical and MRI examination of the TMJ, no direct relationship could be determined between MDDs and TMDs. Dentomaxillofacial Radiology (2012) 41, 316-322. doi: 10.1259/dmfr/27328352