3 results for Multimodal Man-Machine Interface
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
BACKGROUND AND PURPOSE Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. METHODS We prospectively evaluated preoperative MR images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). To study the different tumor compartments, three volumes were identified: the complete tumor volume TV (enhancing part, non-enhancing part, and necrotic core of the tumor), TV+ (TV plus edema), and the contrast-enhancing tumor volume CETV. We quantified the agreement between manual and automated segmentation by calculating diameter measurements as well as Dice coefficients, positive predictive values, sensitivity, relative volume error, and absolute volume error. RESULTS Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). For volumetric segmentations, Dice overlap coefficients differed significantly between automated and manual segmentation for TV+ and TV (p < 0.05) but not for CETV (p > 0.05). Spearman's rank correlation coefficients (ρ) for TV+, TV, and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. CONCLUSIONS In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional diameter-based tumor extent. The automated volume measurements were comparable to manual tumor delineation for CETV tumor volumes, and outperformed inter-rater variability in overlap and sensitivity.
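The overlap measures named above are standard voxel-count formulas. The following is a minimal NumPy sketch of how they can be computed from a pair of binary segmentation masks; the function name and the assumption of non-empty masks are illustrative, and this is not BraTumIA's implementation.

```python
import numpy as np

def overlap_metrics(auto_mask, manual_mask):
    """Overlap metrics between an automated and a manual binary
    segmentation mask of identical shape (nonzero = tumor).
    Volumes are in voxels; multiply by the voxel volume for mm^3.
    Assumes both masks contain at least one tumor voxel."""
    a = np.asarray(auto_mask, dtype=bool)
    m = np.asarray(manual_mask, dtype=bool)
    tp = np.logical_and(a, m).sum()        # voxels labeled tumor by both
    auto_vol, manual_vol = a.sum(), m.sum()
    return {
        "dice": 2.0 * tp / (auto_vol + manual_vol),    # Dice coefficient
        "ppv": tp / auto_vol,                          # positive predictive value
        "sensitivity": tp / manual_vol,                # recall w.r.t. manual mask
        "relative_volume_error": (auto_vol - manual_vol) / manual_vol,
        "absolute_volume_error": abs(auto_vol - manual_vol),
    }
```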
Abstract:
BACKGROUND: Digital imaging methods are a centrepiece of the diagnosis and management of macular disease. A recently developed imaging device combines simultaneous confocal scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT). Using clinical examples, we assess the benefit of this technique for diagnosis and therapeutic follow-up. METHODS: The combined OCT-SLO system (Ophthalmic Technologies Inc., Toronto, Canada) allows confocal en-face fundus imaging and high-resolution OCT scanning at the same time. OCT images are obtained from transversal line scans. A single light source and identical scanning rates yield pixel-to-pixel correspondence between the images. Three-dimensional thickness maps are derived from C-scan stacking. RESULTS: We followed up patients with cystoid macular edema, pigment epithelium detachment, macular hole, branch retinal vein occlusion, and vitreoretinal traction during their course of therapy. The new imaging method illustrates the reduction of cystoid volume, e.g. after intravitreal injection of either angiostatic drugs or steroids. C-scans are used to assess lesion diameters, visualize pathologies involving the vitreoretinal interface, and quantify changes in retinal thickness. CONCLUSION: The combined OCT-SLO system creates both topographic and tomographic images of the retina. New therapeutic options can be followed up closely by observing changes in lesion thickness and cyst volumes. Further studies are needed before routine clinical use.
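As an illustration of deriving a thickness map from a stacked OCT volume, the sketch below locates the first and last above-threshold voxel along each depth column and reports their separation. The threshold-based boundary detection, the (depth, height, width) array layout, and the function name are assumptions for illustration, not the device's algorithm.

```python
import numpy as np

def thickness_map(volume, threshold):
    """Derive a 2-D thickness map from an OCT volume of shape
    (depth, height, width). For each (y, x) column the thickness is
    the span between the first and last voxel whose intensity exceeds
    `threshold`; columns with no tissue get a thickness of 0.
    Multiply by the axial voxel size to convert voxels to microns."""
    tissue = np.asarray(volume) > threshold            # boolean tissue mask
    depth = tissue.shape[0]
    idx = np.arange(depth).reshape(depth, 1, 1)        # depth index per voxel
    inner = np.where(tissue, idx, depth).min(axis=0)   # first tissue voxel
    outer = np.where(tissue, idx, -1).max(axis=0)      # last tissue voxel
    return np.clip(outer - inner + 1, 0, None)         # thickness in voxels
```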
Abstract:
This paper presents a shallow dialogue analysis model aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, and decision tree classifiers for discourse markers. A rule-based approach is proposed for resolving cross-modal references to meeting documents. The methods are trained and evaluated on a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
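A maximum entropy classifier, as used above for dialogue acts, is equivalent to multinomial logistic regression over lexical features. Below is a minimal scikit-learn sketch of that formulation; the toy utterances and act labels are hypothetical and do not come from the paper's data set or annotation format.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training utterances and dialogue-act labels (toy data).
utterances = [
    "could you open the report",
    "the budget looks fine to me",
    "let's move to the next item",
    "do we agree on the deadline",
]
acts = ["request", "statement", "floor-grabber", "question"]

# Bag-of-words features feeding a multinomial logistic regression,
# i.e. a maximum entropy model over word and bigram features.
tagger = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
tagger.fit(utterances, acts)
print(tagger.predict(["could we move the deadline"]))
```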