875 results for Object Segmentation


Relevance: 20.00%

Abstract:

Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions in the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal, and prefrontal regions. In an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research, and of using independent datasets to test hypotheses generated from a data-driven analysis.
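
A minimal sketch of the two analysis steps this abstract describes, spatial ICA followed by the max-criterion test (A < AV > V). The data, component count, overlap threshold, and beta values below are synthetic stand-ins, not the study's pipeline:

```python
# Sketch of spatial ICA and the max-criterion test for multisensory integration.
# Data shapes, component indices, and thresholds are hypothetical.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
bold = rng.standard_normal((240, 5000))   # fMRI data: 240 volumes x 5000 voxels

# Spatial ICA: treat voxels as samples so each component is a spatial map.
ica = FastICA(n_components=20, random_state=0)
spatial_maps = ica.fit_transform(bold.T)   # (voxels, components)
time_courses = ica.mixing_                 # (volumes, components)

# Candidate multisensory voxels: overlap of thresholded component maps
# (here simply the top percentile of two illustrative components).
a_map, av_map = np.abs(spatial_maps[:, 0]), np.abs(spatial_maps[:, 1])
overlap = (a_map > np.percentile(a_map, 99)) & (av_map > np.percentile(av_map, 99))

def meets_max_criterion(beta_a, beta_v, beta_av):
    """Max-criterion for multisensory integration: A < AV > V."""
    return beta_av > beta_a and beta_av > beta_v

print(overlap.sum(), meets_max_criterion(0.8, 1.1, 1.6))
```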

Relevance: 20.00%

Abstract:

With improvements in acquisition speed and quality, the amount of medical image data to be screened by clinicians is becoming challenging in daily clinical practice. To quickly visualize and find abnormalities in medical images, we propose a new method combining segmentation algorithms with statistical shape models. A statistical shape model built from a healthy population fits closely in healthy regions; it will not, however, fit the morphological abnormalities often present in areas of pathology. Using the residual fitting error of the statistical shape model, pathologies can therefore be visualized very quickly. We apply this idea to finding drusen in the retinal pigment epithelium (RPE) of optical coherence tomography (OCT) volumes. A segmentation technique able to accurately segment drusen in patients with age-related macular degeneration (AMD) is applied, and the segmentation is then analyzed with a statistical shape model to visualize potentially pathological areas. An extensive evaluation validates the segmentation algorithm as well as the quality and sensitivity of the hinting system: most drusen with a height of 85.5 μm were detected, and all drusen at least 93.6 μm high were detected.
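
A minimal sketch of the hinting idea: a shape model learned from healthy examples reconstructs a new segmentation, and large residuals flag candidate pathology. Shapes here are synthetic 1-D layer profiles; the paper's actual model, thresholds, and OCT data are not reproduced:

```python
# Pathology hinting via statistical-shape-model residuals (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.standard_normal((50, 200)) * 0.5        # 50 healthy layer profiles
mean_shape = healthy.mean(axis=0)
# Principal modes of healthy variation (rows of Vt), via SVD of centered data.
_, _, vt = np.linalg.svd(healthy - mean_shape, full_matrices=False)
modes = vt[:10]                                       # keep 10 modes

def residual_map(shape):
    """Project a segmented shape onto the healthy model; return per-point error."""
    coeffs = modes @ (shape - mean_shape)
    reconstruction = mean_shape + coeffs @ modes
    return np.abs(shape - reconstruction)

# A drusen-like bump the healthy model cannot explain yields a high residual.
patient = mean_shape.copy()
patient[90:110] += 3.0
hints = residual_map(patient) > 1.0                   # threshold is illustrative
print(np.flatnonzero(hints))
```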

Relevance: 20.00%

Abstract:

Optical coherence tomography (OCT) is a well-established imaging modality in ophthalmology and is used daily in the clinic. Automatic evaluation of such datasets requires an accurate segmentation of the retinal cell layers. However, due to the naturally low signal-to-noise ratio and the resulting poor image quality, this task remains challenging. We propose an automatic graph-based multi-surface segmentation algorithm that internally uses soft constraints to add prior information from a learned model. This improves the accuracy of the segmentation and increases its robustness to noise. Furthermore, we show that the graph size can be greatly reduced by applying a smart segmentation scheme, which allows the segmentation to be computed in seconds instead of minutes without deteriorating accuracy, making it ideal for a clinical setup. An extensive evaluation on 20 OCT datasets of healthy eyes showed a mean unsigned segmentation error of 3.05 ± 0.54 μm over all datasets when compared to the average observer, which is lower than the inter-observer variability. Similar performance was measured for the task of drusen segmentation, demonstrating the usefulness of soft constraints as a tool for dealing with pathologies.
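
A minimal sketch of surface extraction on a cost image with a soft smoothness penalty, in the spirit of the soft constraints described above. This single-surface dynamic-programming variant is illustrative only; it is not the paper's multi-surface graph construction, and the penalty weight and cost image are toy values:

```python
# Surface segmentation on a cost image with a soft smoothness prior (sketch).
import numpy as np

def segment_surface(cost, smooth_weight=1.0, max_jump=2):
    """Return, per column, the row minimizing unary cost plus a soft quadratic
    penalty on row jumps between neighboring columns (|jump| <= max_jump)."""
    rows, cols = cost.shape
    acc = cost[:, 0].copy()                 # best path cost ending at each row
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        new_acc = np.full(rows, np.inf)
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = np.arange(lo, hi)
            totals = acc[lo:hi] + smooth_weight * (prev - r) ** 2 + cost[r, c]
            best = np.argmin(totals)
            new_acc[r], back[r, c] = totals[best], prev[best]
        acc = new_acc
    # Backtrack the cheapest path from the last column.
    surface = np.zeros(cols, dtype=int)
    surface[-1] = int(np.argmin(acc))
    for c in range(cols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface

cost = np.random.default_rng(2).random((30, 40))
print(segment_surface(cost))
```

The quadratic jump penalty is what makes the constraint "soft": implausible surface shapes are discouraged rather than forbidden, so the model prior can be overridden where the image evidence (e.g., a druse) demands it.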

Relevance: 20.00%

Abstract:

Speech is often a multimodal process, presented audiovisually through a talking face. One area of speech perception influenced by visual speech is speech segmentation, the process of breaking a stream of speech into individual words. Mitchel and Weiss (2013) demonstrated that a talking face contains specific cues to word boundaries and that subjects can correctly segment a speech stream when given a silent video of a speaker. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2013). In Experiment 1, subjects spent the most time watching the eyes and mouth, with a trend suggesting that the mouth was viewed more than the eyes. Although subjects displayed significant learning of word boundaries, performance was not correlated with gaze duration on any individual feature, nor with a behavioral measure of autistic-like traits (the Social Responsiveness Scale, SRS). However, trends suggested that as autistic-like traits increased, gaze duration on the mouth increased and gaze duration on the eyes decreased, similar to significant trends seen in autistic populations (Boraston & Blakemore, 2007). In Experiment 2, the same video was modified so that a black bar covered either the eyes or the mouth. Both videos elicited learning of word boundaries equivalent to that seen in the first experiment, and again no correlations were found between segmentation performance and SRS scores in either condition. These results, taken with those of Experiment 1, suggest that neither the eyes nor the mouth are critical to speech segmentation and that perhaps more global head movements indicate word boundaries (see Graf, Cosatto, Strom, & Huang, 2002). Future work will elucidate the contribution of individual features relative to global head movements, as well as extend these results to additional types of speech tasks.
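
A minimal sketch of the kind of gaze/performance analysis the abstract reports: per-subject looking time on an area of interest (AOI), correlated with segmentation accuracy. All numbers are fabricated, and the study's AOIs, measures, and statistics are only approximated:

```python
# Correlating dwell-time proportion on an AOI with segmentation accuracy.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_subjects = 24
mouth_time = rng.uniform(2.0, 10.0, n_subjects)   # seconds on the mouth AOI
total_time = mouth_time + rng.uniform(5.0, 15.0, n_subjects)
mouth_prop = mouth_time / total_time

accuracy = rng.uniform(0.4, 0.9, n_subjects)      # word-boundary test accuracy

r, p = pearsonr(mouth_prop, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")                # a null result, as reported above
```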

Relevance: 20.00%

Abstract:

Capuchin monkeys, Cebus sp., utilize a wide array of gestural displays in the wild, including facial displays such as lip-smacking and bare-teeth displays. In captivity, they have been shown to respond to the head orientation of humans, to show sensitivity to human attentional states, and to follow human gaze behind barriers. In this study, I investigated whether tufted capuchin monkeys (Cebus apella) would attend to and utilize the gestural cues of a conspecific to obtain a hidden reward. Two capuchins faced each other in separate compartments of an apparatus with an open field in between. The open field contained two cups with holes on one side, such that only one monkey, the so-called cuing monkey, could see the reward inside one of the cups. I then moved the cups toward the other, signal-receiving monkey and assessed whether it would utilize untrained cues provided by the cuing monkey to select the cup containing the reward. Two of four female capuchin monkeys learned to select the baited cup significantly more often than chance. Neither of these two monkeys performed above chance spontaneously, however, and the other two monkeys never performed above chance despite many blocks of trials. The successful choices by two monkeys provide experimental evidence that capuchin monkeys attend to and utilize the gestural cues of conspecifics.
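
A minimal sketch of the above-chance test implied by "significantly more often than chance" in a two-cup choice task, assuming a simple one-sided binomial test against p = 0.5; the trial counts are hypothetical:

```python
# Binomial test against chance performance in a two-alternative choice task.
from scipy.stats import binomtest

correct, trials = 68, 100            # hypothetical block of choice trials
result = binomtest(correct, trials, p=0.5, alternative="greater")
print(f"p = {result.pvalue:.4f}")    # small p => above-chance cup selection
```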

Relevance: 20.00%

Abstract:

Most primates live in highly complex social systems and have therefore evolved similarly complex methods of communicating with each other. One type of communication is the use of manual gestures, which are found only in primates. No substantial evidence exists that monkeys use communicative gestures in the wild. However, monkeys may demonstrate the ability to learn and/or use gestures in certain experimental paradigms, since they have been shown to use other visual cues such as gaze. The purpose of this study was to investigate whether ten brown capuchin monkeys (Cebus apella) were able to use gestural cues from monkeys and a pointing cue from a human to obtain a hidden reward, and whether they could transfer this skill from monkeys to humans and from humans to monkeys. One group of monkeys was trained and tested with a conspecific as the cue giver and was then tested with a human cue giver; the second group began training and testing with a human cue giver and was then tested with a monkey cue giver. I found that two monkeys were able to use gestural cues from conspecifics (e.g., reaching) to obtain a hidden reward and then transfer this ability to a pointing cue from a human. Four monkeys learned to use the human pointing cue first and then transferred this ability to the gestural cues of conspecifics. However, the number of trials each monkey needed to transfer the ability varied considerably: some subjects transferred spontaneously in the minimum number of trials needed to reach my criterion for successfully obtaining hidden rewards (N = 40 trials), while others needed many more (e.g., N = 190 trials). Two subjects did not perform successfully in any of the conditions in which they were tested. One subject successfully used the human pointing cue and a human pointing-plus-vocalization cue but did not learn the conspecific cue; another learned the conspecific cue but not the human pointing cue. This was the first study to test whether brown capuchin monkeys could use gestural cues from conspecifics to solve an object-choice task, and the first to test whether capuchins could transfer this skill from monkeys to humans and from humans to monkeys. Results showed that capuchin monkeys were able to flexibly use communicative gestures both when they were unintentionally given by a conspecific and when they were intentionally given by a human to indicate a source of food.

Relevance: 20.00%

Abstract:

Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech, and that learners can extract this information from talking faces. We therefore created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative about word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
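
A minimal sketch of how an artificial speech stream with minimal segmentation cues is typically constructed and analyzed in statistical-learning studies of this kind: transitional probabilities between syllables stay high within words and dip at word boundaries. The lexicon and stream below are illustrative assumptions, not the study's materials:

```python
# Transitional probabilities (TPs) in a synthetic artificial-language stream.
from collections import Counter
import random

words = ["pabiku", "tibudo", "golatu", "daropi"]   # hypothetical trisyllabic lexicon

def syllables(word):
    return [word[i:i + 2] for i in range(0, len(word), 2)]

random.seed(4)
stream = [s for w in random.choices(words, k=300) for s in syllables(w)]

pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])
tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Within-word TPs (e.g. pa->bi) approach 1.0; across-boundary TPs sit near 1/4.
print(tp[("pa", "bi")], tp.get(("ku", "ti"), 0.0))
```

With only these weak statistical cues in the audio, any learning advantage from an informative facial display can be attributed to visual prosody rather than to the acoustic stream itself.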
