47 results for Brain image classification
Abstract:
Water flow and solute transport through soils are strongly influenced by the spatial arrangement of soil materials with different hydraulic and chemical properties. Knowing the specific or statistical arrangement of these materials is considered a key toward improved predictions of solute transport. Our aim was to obtain two-dimensional material maps from photographs of exposed profiles. We developed a segmentation and classification procedure and applied it to the images of a very heterogeneous sand tank, which was used for a series of flow and transport experiments. The segmentation was based on thresholds of soil color, estimated from local median gray values, and of soil texture, estimated from local coefficients of variation of gray values. Important steps were the correction of inhomogeneous illumination and reflection, and the incorporation of prior knowledge in filters used to extract the image features and to smooth the results morphologically. We could check and confirm the success of our mapping by comparing the estimated sand distribution with the designed distribution in the tank. The resulting material map was used later as input to model flow and transport through the sand tank. Similar segmentation procedures may be applied to any high-density raster data, including photographs or spectral scans of field profiles.
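The two local features described above lend themselves to a short illustration. The following is a minimal sketch, not the authors' implementation; the window size, thresholds, and class encoding are placeholder assumptions.

import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def material_map(gray, win=15, median_thresh=128.0, cv_thresh=0.15):
    """Toy two-feature segmentation: local median gray value (color)
    and local coefficient of variation of gray values (texture)."""
    gray = gray.astype(float)
    local_median = median_filter(gray, size=win)
    local_mean = uniform_filter(gray, size=win)
    local_sq_mean = uniform_filter(gray ** 2, size=win)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    local_cv = local_std / np.maximum(local_mean, 1e-6)
    # Combine the two thresholded features into a four-class material map.
    return (local_median > median_thresh).astype(int) * 2 + (local_cv > cv_thresh).astype(int)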
Abstract:
We present a fully automatic segmentation method for multi-modal brain tumor segmentation. The proposed generative-discriminative hybrid model generates initial tissue probabilities, which are subsequently used to enhance the classification and spatial regularization. The model has been evaluated on the BRATS2013 training set, which includes multimodal MRI images from patients with high- and low-grade gliomas. Our method is capable of segmenting the image into healthy (GM, WM, CSF) and pathological tissue (necrotic, enhancing and non-enhancing tumor, edema). We achieved state-of-the-art performance (Dice mean values of 0.69 and 0.8 for tumor subcompartments and complete tumor, respectively) within a reasonable timeframe (4 to 15 minutes).
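The Dice scores quoted above have a simple closed form; a minimal sketch for two binary label masks (array names are illustrative):

import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0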
Abstract:
Multiple sclerosis (MS) is a chronic disease with an inflammatory and neurodegenerative pathology. Axonal loss and neurodegeneration occurs early in the disease course and may lead to irreversible neurological impairment. Changes in brain volume, observed from the earliest stage of MS and proceeding throughout the disease course, may be an accurate measure of neurodegeneration and tissue damage. There are a number of magnetic resonance imaging-based methods for determining global or regional brain volume, including cross-sectional (e.g. brain parenchymal fraction) and longitudinal techniques (e.g. SIENA [Structural Image Evaluation using Normalization of Atrophy]). Although these methods are sensitive and reproducible, caution must be exercised when interpreting brain volume data, as numerous factors (e.g. pseudoatrophy) may have a confounding effect on measurements, especially in a disease with complex pathological substrates such as MS. Brain volume loss has been correlated with disability progression and cognitive impairment in MS, with the loss of grey matter volume more closely correlated with clinical measures than loss of white matter volume. Preventing brain volume loss may therefore have important clinical implications affecting treatment decisions, with several clinical trials now demonstrating an effect of disease-modifying treatments (DMTs) on reducing brain volume loss. In clinical practice, it may therefore be important to consider the potential impact of a therapy on reducing the rate of brain volume loss. This article reviews the measurement of brain volume in clinical trials and practice, the effect of DMTs on brain volume change across trials and the clinical relevance of brain volume loss in MS.
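The cross-sectional brain parenchymal fraction mentioned above is the ratio of brain-tissue volume to total intracranial volume; a minimal sketch from binary tissue masks, assuming the masks and voxel volume are already available:

import numpy as np

def brain_parenchymal_fraction(gm_mask, wm_mask, csf_mask, voxel_volume_mm3=1.0):
    """BPF = (GM + WM volume) / (GM + WM + CSF volume), from binary masks."""
    parenchyma = (gm_mask.astype(bool) | wm_mask.astype(bool)).sum() * voxel_volume_mm3
    intracranial = parenchyma + csf_mask.astype(bool).sum() * voxel_volume_mm3
    return parenchyma / intracranial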
Abstract:
In diagnostic neuroradiology as well as in radiation oncology and neurosurgery, there is an increasing demand for accurate segmentation of tumor-bearing brain images. Atlas-based segmentation is an appealing automatic technique thanks to its robustness and versatility. However, atlas-based segmentation of tumor-bearing brain images is challenging due to the confounding effects of the tumor in the patient image. In this article, we provide a brief background on brain tumor imaging and introduce the clinical perspective, before we categorize and review the state of the art in the current literature on atlas-based segmentation for tumor-bearing brain images. We also present selected methods and results from our own research in more detail. Finally, we conclude with a short summary and look at new developments in the field, including requirements for future routine clinical use.
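The label-transfer step at the core of atlas-based segmentation can be sketched as below, assuming a deformation field has already been computed by some registration method (the hard part, especially with a tumor present); this is an illustration, not any specific method from the review:

import numpy as np
from scipy.ndimage import map_coordinates

def propagate_labels(atlas_labels, deformation):
    """Warp an atlas label map into patient space with a precomputed
    deformation field.

    deformation: array of shape (3, z, y, x) giving, for each patient voxel,
    the corresponding atlas coordinates."""
    # order=0 keeps labels discrete (nearest-neighbour interpolation).
    return map_coordinates(atlas_labels, deformation, order=0, mode='nearest')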
Abstract:
BACKGROUND: Accurate projection of implanted subdural electrode contacts in presurgical evaluation of pharmacoresistant epilepsy cases by invasive EEG is highly relevant. Linear fusion of CT and MRI images may display the contacts in the wrong position due to brain shift effects. OBJECTIVE: A retrospective study in five patients with pharmacoresistant epilepsy was performed to evaluate whether an elastic image fusion algorithm can provide a more accurate projection of the electrode contacts on the pre-implantation MRI as compared to linear fusion. METHODS: An automated elastic image fusion algorithm (AEF), a guided elastic image fusion algorithm (GEF), and a standard linear fusion algorithm (LF) were used on preoperative MRI and post-implantation CT scans. Vertical correction of virtual contact positions, total virtual contact shift, corrections of midline shift and brain shifts due to pneumocephalus were measured. RESULTS: Both AEF and GEF worked well in all 5 cases. An average midline shift of 1.7 mm (SD 1.25) was corrected to 0.4 mm (SD 0.8) after AEF and to 0.0 mm (SD 0) after GEF. Median virtual distances between contacts and cortical surface were corrected by a significant amount, from 2.3 mm after LF to 0.0 mm after AEF and GEF (p<.001). Mean total relative corrections of 3.1 mm (SD 1.85) after AEF and 3.0 mm (SD 1.77) after GEF were achieved. The tested version of GEF did not achieve a satisfying virtual correction of pneumocephalus. CONCLUSION: The technique provided a clear improvement in fusion of pre- and post-implantation scans, although the accuracy is difficult to evaluate.
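The contact-shift measurements reported above reduce to Euclidean distances between each contact's virtual position under two fusion results; a minimal sketch with hypothetical coordinate arrays:

import numpy as np

def contact_shift(coords_linear, coords_elastic):
    """Per-contact displacement (mm) between linear and elastic fusion.

    Both inputs are (n_contacts, 3) arrays of x, y, z positions in mm."""
    return np.linalg.norm(np.asarray(coords_elastic) - np.asarray(coords_linear), axis=1)

# Example: mean total correction over one patient's contacts.
# shifts = contact_shift(lf_positions, aef_positions); shifts.mean()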
Abstract:
Medical doctors often do not trust the result of fully automatic segmentations because they have no way to make corrections when necessary. On the other hand, manual corrections can introduce a user bias. In this work, we propose to integrate the possibility for quick manual corrections into a fully automatic segmentation method for brain tumor images. This allows for necessary corrections while maintaining a high degree of objectivity. The underlying idea is similar to the well-known Grab-Cut algorithm, but here we combine decision forest classification with conditional random field regularization for interactive segmentation of 3D medical images. The approach has been evaluated by two different users on the BraTS2012 dataset. Accuracy and robustness improved compared to a fully automatic method and our interactive approach was ranked among the top performing methods. Time for computation including manual interaction was less than 10 minutes per patient, which makes it attractive for clinical use.
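A much-reduced sketch of the interactive idea, with user scribbles folded back into a decision-forest training set; the conditional random field regularization of the actual method is omitted, and the sample weighting is an assumption:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def retrain_with_scribbles(features, auto_labels, scribble_idx, scribble_labels):
    """Fold user-corrected voxels back into the training set and re-classify.

    features:        (n_voxels, n_features) voxel feature matrix
    auto_labels:     (n_voxels,) labels from the automatic segmentation
    scribble_idx:    indices of voxels the user relabelled
    scribble_labels: labels the user assigned to those voxels"""
    labels = auto_labels.copy()
    labels[scribble_idx] = scribble_labels
    # Weight user-provided samples more strongly than automatic ones (assumed factor).
    weights = np.ones(len(labels))
    weights[scribble_idx] = 10.0
    forest = RandomForestClassifier(n_estimators=100)
    forest.fit(features, labels, sample_weight=weights)
    return forest.predict(features)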
Abstract:
In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked among the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
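A flat, per-voxel majority vote over candidate segmentations conveys the core of the fusion idea; the hierarchical scheme used in the benchmark is not reproduced here:

import numpy as np

def majority_vote(segmentations):
    """Fuse several label maps by per-voxel majority vote.

    segmentations: list of equally shaped integer label arrays."""
    stacked = np.stack(segmentations, axis=0)          # (n_algorithms, ...)
    labels = np.unique(stacked)
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in labels], axis=0)
    return labels[np.argmax(votes, axis=0)]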
Abstract:
Over the last decade, a plethora of computer-aided diagnosis (CAD) systems have been proposed aiming to improve the accuracy of the physicians in the diagnosis of interstitial lung diseases (ILD). In this study, we propose a scheme for the classification of HRCT image patches with ILD abnormalities as a basic component towards the quantification of the various ILD patterns in the lung. The feature extraction method relies on local spectral analysis using a DCT-based filter bank. After convolving the image with the filter bank, q-quantiles are computed for describing the distribution of local frequencies that characterize image texture. Then, the gray-level histogram values of the original image are added, forming the final feature vector. The patches described by these features are then classified by a random forest (RF) classifier. The experimental results demonstrate the superior performance and efficiency of the proposed approach compared with the state of the art.
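A rough sketch of the feature vector described above, with a small separable DCT filter bank, quantiles of the filter responses, and a gray-level histogram; the filter size, quantile set, and histogram binning are assumptions rather than the authors' settings:

import numpy as np
from scipy.fftpack import dct
from scipy.ndimage import convolve

def dct_filter_bank(size=4):
    """2-D separable DCT-II basis filters of a given size (including DC)."""
    basis = dct(np.eye(size), norm='ortho', axis=0)    # orthonormal 1-D DCT basis
    return [np.outer(basis[:, i], basis[:, j]) for i in range(size) for j in range(size)]

def patch_features(patch, q=(0.1, 0.25, 0.5, 0.75, 0.9), n_bins=32):
    """Quantiles of DCT filter responses plus a gray-level histogram."""
    responses = [convolve(patch.astype(float), f) for f in dct_filter_bank()]
    quantile_feats = np.concatenate([np.quantile(np.abs(r), q) for r in responses])
    hist, _ = np.histogram(patch, bins=n_bins, range=(0, 255), density=True)
    return np.concatenate([quantile_feats, hist])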
Abstract:
In this paper we propose a new fully-automatic method for localizing and segmenting 3D intervertebral discs from MR images, where the two problems are solved in a unified data-driven regression and classification framework. We estimate the output (image displacements for localization, or fg/bg labels for segmentation) of image points by exploiting both training data and geometric constraints simultaneously. The problem is formulated in a unified objective function which is then solved globally and efficiently. We validate our method on MR images of 25 patients. Taking manually labeled data as the ground truth, our method achieves a mean localization error of 1.3 mm, a mean Dice metric of 87%, and a mean surface distance of 1.3 mm. Our method can be applied to other localization and segmentation tasks.
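The mean surface distance quoted above can be computed from distance transforms of the two segmentation surfaces; a minimal sketch for binary masks with a given voxel spacing:

import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def mean_surface_distance(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance (mm) between two binary masks."""
    mask_a = mask_a.astype(bool)
    mask_b = mask_b.astype(bool)
    surf_a = mask_a & ~binary_erosion(mask_a)          # boundary voxels of A
    surf_b = mask_b & ~binary_erosion(mask_b)          # boundary voxels of B
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())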
Abstract:
Optimal adjustment of brain networks allows the biased processing of information in response to environmental demands and is therefore a prerequisite for adaptive behaviour. It has been widely shown that a biased state of networks is associated with a particular cognitive process. However, those associations were identified by backward categorization of trials and cannot provide a causal association with cognitive processes. This problem remains a major obstacle to advancing the field, in particular human cognitive neuroscience. In my talk, I will present two approaches to address the causal relationships between brain network interactions and behaviour. Firstly, we combined connectivity analysis of fMRI data and a machine learning method to predict inter-individual differences of behaviour and responsiveness to environmental demands. The connectivity-based classification approach outperforms local activation-based classification analysis, suggesting that interactions in brain networks carry information of instantaneous cognitive processes. Secondly, we have recently established a brand new method combining transcranial alternating current stimulation (tACS), transcranial magnetic stimulation (TMS), and EEG. We use the method to measure signal transmission between brain areas while introducing extrinsic oscillatory brain activity and to study causal association between oscillatory activity and behaviour. We show that phase-matched oscillatory activity creates the phase-dependent modulation of signal transmission between brain areas, while phase-shifted oscillatory activity blunts the phase-dependent modulation. The results suggest that phase coherence between brain areas plays a cardinal role in signal transmission in the brain networks. In sum, I argue that causal approaches will provide a more concrete backbone to cognitive neuroscience.
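The connectivity-based classification described in the first approach can be sketched as: correlate regional fMRI time series, vectorize the upper triangle of the connectivity matrix, and cross-validate a classifier on those features (region definitions and the classifier choice are illustrative assumptions):

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def connectivity_features(timeseries):
    """Upper-triangular functional connectivity from one subject's
    (n_timepoints, n_regions) time-series array."""
    corr = np.corrcoef(timeseries.T)                   # (n_regions, n_regions)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

def classify_subjects(all_timeseries, labels):
    """Cross-validated classification of subjects from connectivity patterns."""
    X = np.vstack([connectivity_features(ts) for ts in all_timeseries])
    return cross_val_score(LinearSVC(), X, np.asarray(labels), cv=5)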
Abstract:
Spinal image analysis and computer assisted intervention have emerged as new and independent research areas, due to the importance of treatment of spinal diseases, increasing availability of spinal imaging, and advances in analytics and navigation tools. Among others, multi-modality spinal image analysis and spinal navigation tools have emerged as two key topics in this new area. We believe that further focused research in these two areas will lead to a much more efficient and accelerated research path, avoiding detours that exist in other applications, such as in brain and heart.
Lung Pattern Classification for Interstitial Lung Diseases Using a Deep Convolutional Neural Network
Abstract:
Automated tissue characterization is one of the most crucial components of a computer aided diagnosis (CAD) system for interstitial lung diseases (ILDs). Although much research has been conducted in this field, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they might be applied in other domains, such as medical image analysis. In this paper, we propose and evaluate a convolutional neural network (CNN), designed for the classification of ILD patterns. The proposed network consists of 5 convolutional layers with 2×2 kernels and LeakyReLU activations, followed by average pooling with size equal to the size of the final feature maps and three dense layers. The last dense layer has 7 outputs, equivalent to the classes considered: healthy, ground glass opacity (GGO), micronodules, consolidation, reticulation, honeycombing and a combination of GGO/reticulation. To train and evaluate the CNN, we used a dataset of 14696 image patches, derived from 120 CT scans from different scanners and hospitals. To the best of our knowledge, this is the first deep CNN designed for the specific problem. A comparative analysis proved the effectiveness of the proposed CNN against previous methods in a challenging dataset. The classification performance (~85.5%) demonstrated the potential of CNNs in analyzing lung patterns. Future work includes extending the CNN to three-dimensional data provided by CT volume scans and integrating the proposed method into a CAD system that aims to provide differential diagnosis for ILDs as a supportive tool for radiologists.
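A minimal Keras sketch following the architecture as described in the abstract (the filter counts per layer, dense-layer widths, and input patch size are assumptions, since they are not specified here):

from tensorflow.keras import layers, models

def build_ild_cnn(patch_size=32, n_classes=7):
    """Five 2x2 conv layers with LeakyReLU, average pooling over the final
    feature maps, then three dense layers (the last with 7 class outputs)."""
    model = models.Sequential([layers.Input(shape=(patch_size, patch_size, 1))])
    for n_filters in (16, 32, 64, 96, 128):            # filter counts are assumed
        model.add(layers.Conv2D(n_filters, kernel_size=2, padding='same'))
        model.add(layers.LeakyReLU())
    model.add(layers.GlobalAveragePooling2D())          # pool over final feature maps
    model.add(layers.Dense(256, activation='relu'))     # dense widths are assumed
    model.add(layers.Dense(128, activation='relu'))
    model.add(layers.Dense(n_classes, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model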
Abstract:
Erratum to: Acta Neuropathol (2012) 123:273-284. DOI 10.1007/s00401-011-0914-z. The authors would like to correct Fig. 3 of the original manuscript, since the image in Fig. 3b does not correspond to a VEGF-treated animal. Corrected Fig. 3 is shown below. We apologize for this mistake.