952 results for multi-modal logic
Abstract:
Given a large image set in which very few images have labels, how do we guess labels for the remaining majority? How do we spot images that need brand-new labels different from the predefined ones? How do we summarize these data to route the user’s attention to what really matters? Here we answer all these questions. Specifically, we propose QuMinS, a fast, scalable solution to two problems: (i) Low-labor labeling (LLL) – given an image set in which very few images have labels, find the most appropriate labels for the rest; and (ii) Mining and attention routing – in the same setting, find clusters, the top-N_O outlier images, and the N_R images that best represent the data. Experiments on satellite images spanning up to 2.25 GB show that, in contrast to state-of-the-art labeling techniques, QuMinS scales linearly with data size, being up to 40 times faster than top competitors (GCap) while achieving equal or better accuracy; it spots images that potentially require unanticipated labels, and it works even with tiny initial label sets of roughly five examples. We also report a case study of our method’s practical usage, showing that QuMinS is a viable tool for automatic coffee crop detection from remote sensing images.
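The low-labor labeling problem described above — propagating a handful of labels through a large image collection — can be illustrated with a graph-based semi-supervised sketch. The example below is not the QuMinS algorithm itself; it uses scikit-learn's LabelSpreading on hypothetical image feature vectors to show the general idea of guessing labels for the unlabeled majority from only a few seed examples.

```python
# Minimal sketch of graph-based low-labor labeling (not QuMinS itself):
# a handful of labeled feature vectors seed labels for all the rest.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

# Hypothetical image features: two clusters of 100 images each.
features = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 16)),   # e.g. "coffee crop" tiles
    rng.normal(4.0, 1.0, size=(100, 16)),   # e.g. "pasture" tiles
])

# Only ~5 images per class carry a label; -1 marks "unlabeled".
labels = np.full(200, -1)
labels[:5] = 0
labels[100:105] = 1

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(features, labels)

print("guessed label counts:", np.bincount(model.transduction_))
```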
Abstract:
In this paper we propose a variational approach for multimodal image registration based on the diffeomorphic demons algorithm. Diffeomorphic demons has proven to be a robust and efficient approach to intensity-based image registration; however, its main drawback is that it cannot handle multiple modalities. We propose to replace the standard demons similarity metric (image intensity differences) with point-wise mutual information (PMI) in the energy function. By comparing the accuracy of our PMI-based diffeomorphic demons with that of the B-spline-based free-form deformation (FFD) approach on simulated deformations, we show that the proposed algorithm performs significantly better.
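As a rough illustration of the point-wise mutual information term mentioned above, the sketch below estimates a joint intensity histogram of two synthetic images and evaluates PMI(a, b) = log [ p(a, b) / (p(a) p(b)) ] at each pixel pair. This is only an assumed, minimal formulation of a PMI-based similarity, not the authors' actual demons implementation.

```python
# Minimal sketch of a point-wise mutual information (PMI) similarity map
# between two intensity images (assumed formulation, not the paper's code).
import numpy as np

def pmi_map(fixed, moving, bins=32):
    """Per-pixel PMI = log p(a,b) / (p(a) p(b)) for corresponding intensities."""
    # Discretize intensities (assumed to lie in [0, 1)) into histogram bins.
    f_idx = np.clip((fixed * bins).astype(int), 0, bins - 1)
    m_idx = np.clip((moving * bins).astype(int), 0, bins - 1)

    # Joint intensity histogram, normalized to a joint probability table.
    joint = np.zeros((bins, bins))
    np.add.at(joint, (f_idx.ravel(), m_idx.ravel()), 1)
    joint /= joint.sum()

    p_f = joint.sum(axis=1)   # marginal of the fixed image
    p_m = joint.sum(axis=0)   # marginal of the moving image

    eps = 1e-12
    pmi = np.log((joint + eps) / (np.outer(p_f, p_m) + eps))
    return pmi[f_idx, m_idx]  # look up PMI at each pixel's intensity pair

# Toy multimodal pair: the "moving" image is a nonlinear remapping of "fixed".
fixed = np.random.rand(64, 64)
moving = 1.0 - fixed ** 2
print("mean PMI:", pmi_map(fixed, moving).mean())
```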
Abstract:
In the present multi-modal study we aimed to investigate the role of visual exploration in relation to neuronal activity and performance during visuospatial processing. To this end, event-related functional magnetic resonance imaging (er-fMRI) was combined with simultaneous eye-tracking recordings and transcranial magnetic stimulation (TMS). Two groups of twenty healthy subjects each performed an angle discrimination task with different levels of difficulty during er-fMRI. The number of fixations, as a measure of visual exploration effort, was chosen to predict blood oxygen level-dependent (BOLD) signal changes using the general linear model (GLM). Without TMS, a positive linear relationship between visual exploration effort and the BOLD signal was found in a bilateral fronto-parietal cortical network, indicating that these regions reflect the increased number of fixations and the higher brain activity due to higher task demands. Furthermore, the relationship found between the number of fixations and performance demonstrates the relevance of visual exploration for visuospatial task solving. In the TMS group, offline theta burst TMS (TBS) was applied over the right posterior parietal cortex (PPC) before the fMRI experiment started. Compared to controls, TBS led to a reduced correlation between visual exploration and BOLD signal change in regions of the fronto-parietal network of the right hemisphere, indicating a disruption of the network. In contrast, an increased correlation was found in regions of the left hemisphere, suggesting an attempt to compensate for the function of the disturbed areas. TBS led to fewer fixations and faster response times while keeping accuracy at the same level, indicating that, without TBS, subjects explored more than was actually needed.
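To make the general linear model step concrete, the following sketch regresses a synthetic BOLD response on a fixation-count regressor, in the spirit of using the number of fixations to predict BOLD signal changes. The data and variable names are purely illustrative; this is not the study's actual SPM/FSL analysis pipeline.

```python
# Minimal sketch of a GLM relating a fixation-count regressor to a BOLD signal
# (illustrative synthetic data; not the study's actual analysis).
import numpy as np

rng = np.random.default_rng(1)
n_trials = 120

# Hypothetical per-trial visual exploration effort (number of fixations).
fixations = rng.poisson(lam=6, size=n_trials)

# Hypothetical BOLD response: positive linear relationship plus noise.
bold = 0.4 * fixations + rng.normal(0.0, 1.0, size=n_trials)

# Design matrix: intercept column plus the fixation regressor.
X = np.column_stack([np.ones(n_trials), fixations])

# Ordinary least squares estimate of the GLM betas.
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(f"intercept={beta[0]:.3f}, fixation beta={beta[1]:.3f}")
```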
Abstract:
During the last decade, a multi-modal approach has been established in human experimental pain research for assessing pain thresholds and responses to various experimental pain modalities. Studies have concluded that differences in responses to pain stimuli are mainly related to variation between individuals rather than to variation in response to different stimulus modalities. In a factor analysis of 272 consecutive volunteers (137 men and 135 women) who underwent tests with different experimental pain modalities, we determined whether responses to different pain modalities represent distinct, individually uncorrelated dimensions of pain perception. Volunteers underwent single painful electrical stimulation, repeated painful electrical stimulation (temporal summation), a test of the reflex receptive field, pressure pain stimulation, heat pain stimulation, cold pain stimulation, and a cold pressor test (ice water test). Five distinct factors were found, representing responses to five distinct experimental pain modalities: pressure, heat, cold, electrical stimulation, and reflex receptive fields. Each of the factors explained approximately 8% to 35% of the observed variance, and the five factors cumulatively explained 94% of the variance. The correlation between the five factors was near null (median ρ = 0.00, range -0.03 to 0.05), with 95% confidence intervals for pairwise correlations between two factors excluding any relevant correlation. Results were essentially similar for analyses stratified by gender and age. Responses to different experimental pain modalities represent different specific dimensions and should be assessed in combination in future pharmacological and clinical studies to represent the complexity of nociception and pain experience.
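The factor-analytic result — five nearly uncorrelated factors, one per pain modality — can be illustrated with a small sketch. The code below runs a standard factor analysis on synthetic response data generated from five independent latent dimensions; the data and variable names are hypothetical, not the study's 272-volunteer dataset.

```python
# Minimal sketch of a factor analysis recovering independent modality factors
# (synthetic data; not the study's dataset).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_subjects = 272

# Five independent latent "modality" dimensions (pressure, heat, cold,
# electrical, reflex receptive field), each measured by 3 noisy test scores.
latent = rng.normal(size=(n_subjects, 5))
loadings = np.kron(np.eye(5), np.ones((1, 3)))   # 5 x 15 block loading matrix
responses = latent @ loadings + 0.3 * rng.normal(size=(n_subjects, 15))

fa = FactorAnalysis(n_components=5, random_state=0)
scores = fa.fit_transform(responses)

# Correlations between the recovered factors should be near zero.
print(np.round(np.corrcoef(scores.T), 2))
```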
Abstract:
GuideView is a system designed for structured, multi-modal delivery of clinical guidelines. Clinical instructions are presented simultaneously as voice, text, pictures, video, or animations. Users navigate using mouse clicks and voice commands. An evaluation study performed at a medical simulation laboratory found that voice and video instructions were rated highly.
Abstract:
BACKGROUND AND PURPOSE Reproducible segmentation of brain tumors on magnetic resonance images is an important clinical need. This study was designed to evaluate the reliability of a novel fully automated segmentation tool for brain tumor image analysis in comparison to manually defined tumor segmentations. METHODS We prospectively evaluated preoperative MR images from 25 glioblastoma patients. Two independent expert raters performed manual segmentations. Automatic segmentations were performed using the Brain Tumor Image Analysis software (BraTumIA). In order to study the different tumor compartments, the complete tumor volume (TV; enhancing part plus non-enhancing part plus necrotic core of the tumor), the TV+ (TV plus edema), and the contrast-enhancing tumor volume (CETV) were identified. We quantified the overlap between manual and automated segmentations by calculating diameter measurements as well as Dice coefficients, positive predictive values, sensitivity, relative volume error, and absolute volume error. RESULTS Comparison of automated versus manual extraction of 2-dimensional diameter measurements showed no significant difference (p = 0.29). Comparison of automated versus manual volumetric segmentations showed significant differences for TV+ and TV (p < 0.05) but no significant differences for CETV (p > 0.05) with regard to the Dice overlap coefficients. Spearman's rank correlation coefficients (ρ) of TV+, TV, and CETV showed highly significant correlations between automatic and manual segmentations. Tumor localization did not influence the accuracy of segmentation. CONCLUSIONS In summary, we demonstrated that BraTumIA supports radiologists and clinicians by providing accurate measures of cross-sectional diameter-based tumor extent. The automated volume measurements were comparable to manual tumor delineation for CETV tumor volumes, and outperformed inter-rater variability for overlap and sensitivity.
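The Dice overlap coefficient used above to compare automated and manual segmentations has a simple definition, Dice = 2|A ∩ B| / (|A| + |B|). Below is a minimal sketch for binary segmentation masks; it is illustrative only, not the evaluation code used in the study.

```python
# Minimal sketch of the Dice coefficient between two binary segmentation masks
# (illustrative only; not the study's evaluation code).
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice = 2 * |A ∩ B| / (|A| + |B|) for boolean volumes."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: two overlapping "tumor" masks on a small volume.
manual = np.zeros((10, 10, 10), dtype=bool)
automatic = np.zeros((10, 10, 10), dtype=bool)
manual[2:7, 2:7, 2:7] = True
automatic[3:8, 3:8, 3:8] = True
print(f"Dice = {dice_coefficient(manual, automatic):.3f}")
```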
Abstract:
Ophthalmologists typically acquire different image modalities to diagnose eye pathologies. These include, e.g., fundus photography, Optical Coherence Tomography (OCT), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI). These images are often complementary and express the same pathologies in different ways, and some pathologies are only visible in a particular modality. Thus, it is beneficial for the ophthalmologist to have these modalities fused into a single patient-specific model. The goal of the present article is the fusion of fundus photography with segmented MRI volumes. This adds information to the MRI that was not visible before, such as vessels and the macula. The article's contributions include automatic detection of the optic disc, the fovea, and the optic axis, and an automatic segmentation of the vitreous humor of the eye.
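As an illustration of one sub-step mentioned above, the sketch below locates a bright, roughly circular optic-disc-like region in a grayscale fundus image by smoothing and taking the brightest location. This is a common heuristic only, not the article's actual detection method, and the function name and parameters are hypothetical.

```python
# Heuristic sketch for locating an optic-disc-like bright region in a
# grayscale fundus image (illustrative heuristic, not the article's method).
import numpy as np
from scipy.ndimage import gaussian_filter

def locate_bright_disc(gray_fundus, sigma=15):
    """Return (row, col) of the brightest smoothed region; the optic disc is
    typically the brightest compact structure in a fundus photograph."""
    smoothed = gaussian_filter(gray_fundus.astype(float), sigma=sigma)
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)

# Toy image: dark background with one bright blob standing in for the disc.
image = np.zeros((256, 256))
yy, xx = np.mgrid[0:256, 0:256]
image += np.exp(-((yy - 80) ** 2 + (xx - 180) ** 2) / (2 * 20 ** 2))
print("estimated disc centre:", locate_bright_disc(image))
```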