Abstract:
Previous analyses of aortic displacement and distension using computed tomography angiography (CTA) were performed on double-oblique multi-planar reformations and did not account for through-plane motion. The aim of this study was to overcome this limitation by using a novel computational approach to assess thoracic aortic displacement and distension in their true four-dimensional extent. Vessel segmentation with landmark tracking was performed on CTA scans of 24 patients without evidence of aortic disease. Distension magnitudes and maximum displacement vectors (MDV), including their direction, were analyzed at 5 aortic locations: the left coronary artery (COR), mid-ascending aorta (ASC), brachiocephalic trunk (BCT), left subclavian artery (LSA), and descending aorta (DES). Distension was highest at COR (2.3 ± 1.2 mm) and BCT (1.7 ± 1.1 mm) compared with ASC, LSA, and DES (p < 0.005). MDV decreased from COR to LSA (p < 0.005) and was highest at COR (6.2 ± 2.0 mm) and ASC (3.8 ± 1.9 mm). Displacement was directed towards the left and anterior at COR and ASC. Craniocaudal displacement at COR and ASC was 1.3 ± 0.8 mm and 0.3 ± 0.3 mm, respectively. At BCT, LSA, and DES, no predominant displacement direction was observable. Vessel displacement and wall distension are highest in the ascending aorta, and ascending aortic displacement is directed primarily towards the left and anterior. Craniocaudal displacement remains low even close to the left cardiac ventricle.
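The maximum displacement vector of a tracked landmark can be illustrated with a minimal sketch: given the landmark's 3D position at each cardiac phase, the MDV is the displacement relative to a reference position with the largest magnitude. The function name and the choice of the first phase as reference are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def max_displacement_vector(positions):
    """Return the displacement vector with the largest magnitude.

    positions: (n_phases, 3) array of landmark coordinates in mm,
    with the first row taken as the reference (e.g. end-diastolic) position.
    """
    positions = np.asarray(positions, dtype=float)
    disp = positions - positions[0]          # displacement per phase
    magnitudes = np.linalg.norm(disp, axis=1)
    return disp[np.argmax(magnitudes)]       # the MDV

# Toy trajectory: landmark moves 5 mm left and 2 mm anterior at peak systole
track = [[0, 0, 0], [2, 1, 0], [5, 2, 0], [3, 1, 0]]
mdv = max_displacement_vector(track)
print(mdv)                   # [5. 2. 0.]
print(np.linalg.norm(mdv))   # sqrt(29) ≈ 5.39 mm
```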
Abstract:
In this paper, we propose novel methodologies for the automatic segmentation and recognition of multi-food images. The proposed methods implement the first modules of a carbohydrate-counting and insulin advisory system for type 1 diabetic patients. Initially, the plate is segmented using pyramidal mean-shift filtering and a region growing algorithm. Each of the resulting segments is then described by both color and texture features and classified by a support vector machine into one of six major food classes. Finally, a modified version of the Huang and Dom evaluation index is proposed, addressing the particular needs of the food segmentation problem. The experimental results demonstrate the effectiveness of the proposed method, which achieves a segmentation accuracy of 88.5% and a recognition rate of 87%.
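The classification stage can be sketched as follows. This is a hypothetical illustration of the principle only: each plate segment is summarized by a feature vector (here just the mean RGB color; the paper combines color and texture descriptors) and classified by a support vector machine. The synthetic class data and class names are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic training data: mean-RGB features for two hypothetical food classes
rice  = rng.normal([230, 225, 210], 5, size=(20, 3))   # pale segments
sauce = rng.normal([150,  60,  40], 5, size=(20, 3))   # reddish segments
X = np.vstack([rice, sauce])
y = [0] * 20 + [1] * 20                                # 0 = rice, 1 = sauce

# RBF-kernel SVM, as in the paper's segment classifier (features differ)
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

# A new reddish segment should be recognized as class 1 ("sauce")
print(clf.predict([[148, 62, 45]]))   # [1]
```

In the full system this prediction would feed the carbohydrate-counting module with the recognized food class per segment.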
Abstract:
BACKGROUND AND PURPOSE Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. METHODS We prospectively evaluated preoperative MR images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). In order to study the different tumor compartments, we identified the complete tumor volume TV (enhancing part plus non-enhancing part plus necrotic core of the tumor), the TV+ (TV plus edema), and the contrast-enhancing tumor volume CETV. We quantified the overlap between manual and automated segmentations by calculating diameter measurements as well as the Dice coefficients, positive predictive values, sensitivity, relative volume error, and absolute volume error. RESULTS Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). Comparison of automated versus manual volumetric segmentations showed significant differences for TV+ and TV (p < 0.05) but not for CETV (p > 0.05) with regard to the Dice overlap coefficients. Spearman's rank correlation coefficients (ρ) of TV+, TV, and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. CONCLUSIONS In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional, diameter-based tumor extension. The automated volume measurements were comparable to manual tumor delineation for CETV tumor volumes, and outperformed inter-rater variability for overlap and sensitivity.
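The overlap metrics used to compare automatic and manual segmentations can be sketched on binary label masks (1 = tumor, 0 = background). This is a minimal sketch of the standard definitions, not the study's evaluation code.

```python
import numpy as np

def overlap_metrics(auto_mask, manual_mask):
    """Dice, PPV, sensitivity, and relative volume error for binary masks."""
    a = np.asarray(auto_mask, dtype=bool)
    m = np.asarray(manual_mask, dtype=bool)
    tp = np.logical_and(a, m).sum()              # voxels labeled tumor by both
    dice = 2 * tp / (a.sum() + m.sum())          # Dice overlap coefficient
    ppv = tp / a.sum()                           # positive predictive value
    sens = tp / m.sum()                          # sensitivity (recall)
    rel_vol_err = (a.sum() - m.sum()) / m.sum()  # relative volume error
    return dice, ppv, sens, rel_vol_err

auto = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])
manual = np.array([[1, 1, 1, 0],
                   [1, 0, 0, 0]])
dice, ppv, sens, rve = overlap_metrics(auto, manual)
print(round(dice, 2))   # 0.75
```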
Abstract:
In diagnostic neuroradiology as well as in radiation oncology and neurosurgery, there is an increasing demand for accurate segmentation of tumor-bearing brain images. Atlas-based segmentation is an appealing automatic technique thanks to its robustness and versatility. However, atlas-based segmentation of tumor-bearing brain images is challenging due to the confounding effects of the tumor in the patient image. In this article, we provide a brief background on brain tumor imaging and introduce the clinical perspective, before we categorize and review the state of the art in the current literature on atlas-based segmentation for tumor-bearing brain images. We also present selected methods and results from our own research in more detail. Finally, we conclude with a short summary and look at new developments in the field, including requirements for future routine clinical use.
Abstract:
In contrast to preoperative brain tumor segmentation, the problem of postoperative brain tumor segmentation has rarely been approached so far. We present a fully automatic segmentation method using multimodal magnetic resonance image data and patient-specific semi-supervised learning. The idea behind our semi-supervised approach is to effectively fuse information from both pre- and postoperative image data of the same patient to improve segmentation of the postoperative image. We pose image segmentation as a classification problem and solve it by adopting a semi-supervised decision forest. The method is evaluated on a cohort of 10 high-grade glioma patients, with segmentation performance and computation time comparable or superior to a state-of-the-art brain tumor segmentation method. Moreover, our results confirm that the inclusion of preoperative MR images leads to better performance in postoperative brain tumor segmentation.
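The semi-supervised principle can be illustrated with scikit-learn's self-training wrapper around a random forest; the paper uses its own semi-supervised decision forest, so this sketch only shows the idea under simplified assumptions: labeled voxels (here, from a toy 1-D intensity feature) train the forest, which then propagates labels to unlabeled voxels it classifies confidently.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(1)

# Toy intensity features: tumor voxels brighter than background (assumption)
labeled_bg    = rng.normal(0.2, 0.05, size=(30, 1))
labeled_tumor = rng.normal(0.8, 0.05, size=(30, 1))
unlabeled     = rng.normal(0.5, 0.3, size=(40, 1))   # e.g. postoperative voxels

X = np.vstack([labeled_bg, labeled_tumor, unlabeled])
y = np.array([0] * 30 + [1] * 30 + [-1] * 40)        # -1 marks unlabeled samples

# Self-training: the forest iteratively labels confident unlabeled samples
model = SelfTrainingClassifier(RandomForestClassifier(random_state=0))
model.fit(X, y)

print(model.predict([[0.85], [0.15]]))   # bright -> tumor, dark -> background
```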
Abstract:
Medical doctors often do not trust the results of fully automatic segmentations because they have no way to make corrections when necessary. On the other hand, manual corrections can introduce a user bias. In this work, we propose to integrate the possibility of quick manual corrections into a fully automatic segmentation method for brain tumor images. This allows for necessary corrections while maintaining high objectivity. The underlying idea is similar to the well-known GrabCut algorithm, but here we combine decision forest classification with conditional random field regularization for interactive segmentation of 3D medical images. The approach was evaluated by two different users on the BraTS2012 dataset. Accuracy and robustness improved compared to a fully automatic method, and our interactive approach ranked among the top-performing methods. Computation time including manual interaction was less than 10 minutes per patient, which makes the approach attractive for clinical use.
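The interactive idea can be illustrated with a highly simplified 2-D sketch, not the paper's decision-forest + CRF pipeline: an automatic label map is smoothed by iterated neighborhood majority voting (a crude stand-in for CRF regularization), while user-corrected pixels are pinned and never overwritten, so corrections survive the regularization.

```python
import numpy as np

def regularize_with_corrections(labels, corrections, n_iter=5):
    """Smooth a binary label map while keeping user corrections fixed.

    corrections: same shape as labels; -1 means no user input at that pixel,
    0/1 pins the pixel to that label.
    """
    lab = labels.astype(float)
    fixed = corrections >= 0
    lab[fixed] = corrections[fixed]
    for _ in range(n_iter):
        p = np.pad(lab, 1, mode="edge")
        # average over the pixel and its 4-neighborhood, then threshold
        avg = (p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
               + p[1:-1, :-2] + p[1:-1, 2:]) / 5.0
        lab = (avg >= 0.5).astype(float)
        lab[fixed] = corrections[fixed]      # corrections stay pinned
    return lab.astype(int)

auto = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 1],               # isolated false positive at (1, 3)
                 [1, 1, 0, 0]])

corr = np.full(auto.shape, -1)               # no manual input: smoothing removes
print(regularize_with_corrections(auto, corr)[1, 3])   # 0  (false positive gone)

corr2 = np.full(auto.shape, -1)
corr2[1, 3] = 1                              # user insists this pixel is tumor
print(regularize_with_corrections(auto, corr2)[1, 3])  # 1  (correction kept)
```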
Abstract:
In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked at the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations remain publicly available through an online evaluation system as an ongoing benchmarking resource.
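The basic voting step behind label fusion can be sketched as follows. This shows plain per-voxel majority voting only; the hierarchical variant used in BRATS votes per nested tumor region, which is not reproduced here.

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse label maps: each voxel gets the most frequently assigned label."""
    segs = np.stack(segmentations)           # (n_algorithms, ...) label maps
    n_labels = int(segs.max()) + 1
    # count votes per label at every voxel, then pick the winner
    votes = np.stack([(segs == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three algorithms labeling the same four voxels (0 = background,
# 1 = edema, 2 = enhancing tumor — labels illustrative)
s1 = np.array([0, 1, 1, 2])
s2 = np.array([0, 1, 2, 2])
s3 = np.array([1, 1, 1, 2])
print(majority_vote([s1, s2, s3]))   # [0 1 1 2]
```

Note that `argmax` breaks ties in favor of the lower label index; a real fusion scheme would need an explicit tie-breaking rule.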