907 results for audio-visual information


Relevance: 100.00%

Abstract:

The aim of the present study was to investigate whether healthy first-degree relatives of schizophrenia patients show reduced sensitivity performance, higher intra-individual variability (IIV) in reaction time (RT), and a steeper decline in sensitivity over time in a sustained attention task. Healthy first-degree relatives of schizophrenia patients (n=23) and healthy control subjects (n=46) without a family history of schizophrenia performed a demanding version of the Rapid Visual Information Processing task (RVIP). RTs, hits, false alarms, and the sensitivity index A' were assessed. The relatives were significantly less sensitive, tended to have higher IIV in RT, but sustained the impaired level of sensitivity over time. Impaired performance on the RVIP is a possible endophenotype for schizophrenia. Higher IIV in RT, apparently caused by impaired context representations, might result in fluctuations in control and lead to more frequent attentional lapses.
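The sensitivity index A' used here is a non-parametric measure that can be computed directly from hit and false-alarm rates (Grier's formulation), and IIV in RT is commonly summarised as the standard deviation or coefficient of variation of a subject's reaction times. The Python sketch below illustrates both computations on hypothetical single-subject numbers; it is not the authors' analysis code.

```python
import statistics

def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Non-parametric sensitivity index A' (Grier's formulation)."""
    if hit_rate >= fa_rate:
        return 0.5 + ((hit_rate - fa_rate) * (1 + hit_rate - fa_rate)) / (4 * hit_rate * (1 - fa_rate))
    return 0.5 - ((fa_rate - hit_rate) * (1 + fa_rate - hit_rate)) / (4 * fa_rate * (1 - hit_rate))

def rt_iiv(rts_ms: list[float]) -> float:
    """Intra-individual variability of reaction times, here the coefficient of variation (SD / mean)."""
    return statistics.stdev(rts_ms) / statistics.mean(rts_ms)

# Hypothetical single-subject RVIP summary: 70% hits, 5% false alarms
print(round(a_prime(0.70, 0.05), 3))              # ~0.90, i.e. good sensitivity
print(round(rt_iiv([420, 455, 430, 510, 470]), 3))  # RT variability relative to mean RT
```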

Relevance: 100.00%

Abstract:

Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal, and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
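The max-criterion (A < AV > V) requires the audio-visual response in a region to exceed the stronger of the two unisensory responses. The sketch below illustrates one way such a test could be run on hypothetical per-subject beta estimates (paired one-sided t-tests); it is an illustration of the criterion, not the authors' pipeline.

```python
import numpy as np
from scipy import stats

def max_criterion(beta_a, beta_v, beta_av, alpha=0.05):
    """Test A < AV > V: AV betas must significantly exceed both unisensory betas
    (one-sided paired t-tests across subjects). Returns True if both tests pass."""
    beta_a, beta_v, beta_av = map(np.asarray, (beta_a, beta_v, beta_av))
    _, p_a = stats.ttest_rel(beta_av, beta_a, alternative="greater")
    _, p_v = stats.ttest_rel(beta_av, beta_v, alternative="greater")
    return (p_a < alpha) and (p_v < alpha)

# Hypothetical ROI betas for 12 subjects
rng = np.random.default_rng(0)
a  = rng.normal(0.8, 0.3, 12)
v  = rng.normal(1.0, 0.3, 12)
av = rng.normal(1.5, 0.3, 12)
print(max_criterion(a, v, av))   # True only if AV reliably exceeds both A and V
```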

Relevance: 100.00%

Abstract:

Motor-performance-enhancing effects of long final fixations before movement initiation, a phenomenon called Quiet Eye (QE), have repeatedly been demonstrated. Drawing on the information-processing framework, it is assumed that the QE supports information processing, as revealed by the close link between QE duration and task demands concerning, in particular, response selection and movement parameterisation. However, the question remains whether the suggested mechanism also holds for processes related to stimulus identification. Thus, in a series of two experiments, performance in a targeting task was tested as a function of experimentally manipulated visual processing demands as well as experimentally manipulated QE durations. The results support the suggested link because a performance-enhancing QE effect was found only under increased visual processing demands: whereas QE duration did not affect performance as long as positional information was preserved (Experiment 1), in the comparison of full vs. no target visibility, QE efficiency turned out to depend on information processing time as soon as the interval fell below a certain threshold (Experiment 2). Thus, the results contradict alternative explanations of QE effects, e.g., posture-based accounts, and support the assumption that the crucial mechanism behind the QE phenomenon is rooted in the cognitive domain.

Relevance: 100.00%

Abstract:

BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (towards the speaker or the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction, revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
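The dependent measures in this study, cumulative and mean fixation duration per region of interest, can be derived from a labelled list of fixations. The sketch below is a simplified illustration using hypothetical fixation records, not the authors' eye-tracking analysis code.

```python
from collections import defaultdict

# Hypothetical fixation records: (ROI label, fixation duration in ms)
fixations = [
    ("speaker_face", 310), ("speaker_face", 250), ("speaker_hands", 180),
    ("listener_face", 220), ("speaker_face", 400), ("speaker_body", 150),
]

durations_by_roi = defaultdict(list)
for roi, duration_ms in fixations:
    durations_by_roi[roi].append(duration_ms)

for roi, durations in durations_by_roi.items():
    cumulative = sum(durations)             # total dwell time on this ROI
    mean = cumulative / len(durations)      # average single-fixation duration
    print(f"{roi}: cumulative={cumulative} ms, mean={mean:.0f} ms")
```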


Relevance: 100.00%

Abstract:

This article presents a novel system and control strategy for visual following of a 3D moving object by an Unmanned Aerial Vehicle (UAV). The presented strategy relies only on the visual information provided by an adaptive tracking method based on color information, which, together with the dynamics of a camera fixed to a rotary-wing UAV, is used to develop an image-based visual servoing (IBVS) system. This system focuses on continuously following a 3D moving target object, maintaining it at a fixed distance and centered on the image plane. The algorithm is validated in real flights in outdoor scenarios, showing the robustness of the proposed system against wind perturbations, illumination and weather changes, among others. The obtained results indicate that the proposed algorithm is suitable for complex control tasks, such as object following and pursuit and flying in formation, as well as for indoor navigation.
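Image-based visual servoing of this kind typically closes the control loop directly on image-plane quantities, for example by commanding yaw, vertical, and forward velocities proportional to the offset of the tracked color blob from the image center and to the deviation of its apparent size from a reference. The sketch below shows a generic proportional IBVS law under those assumptions; the gains, image size, and interface are hypothetical, and this is not the controller described in the article.

```python
from dataclasses import dataclass

@dataclass
class VelocityCommand:
    yaw_rate: float      # rad/s, positive = turn right
    vz: float            # m/s, positive = climb
    vx: float            # m/s, positive = forward

def ibvs_command(cx: float, cy: float, area: float,
                 img_w: int = 640, img_h: int = 480,
                 ref_area: float = 5000.0,
                 k_yaw: float = 0.002, k_z: float = 0.002, k_x: float = 0.0001) -> VelocityCommand:
    """Proportional image-based control: keep the tracked blob centered and at a reference size.

    cx, cy  -- blob centroid in pixels (from the color tracker)
    area    -- blob area in pixels^2, used as a proxy for distance
    """
    err_x = cx - img_w / 2          # horizontal offset -> yaw towards the target
    err_y = cy - img_h / 2          # vertical offset (image y grows downward) -> climb/descend
    err_a = ref_area - area         # blob too small (target far) -> move forward
    return VelocityCommand(yaw_rate=k_yaw * err_x,
                           vz=-k_z * err_y,
                           vx=k_x * err_a)

# Example: target seen right of center, slightly low, and too small (far away)
print(ibvs_command(cx=400.0, cy=280.0, area=3000.0))
```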

Relevance: 100.00%

Abstract:

Whenever we open our eyes, we are confronted with an overwhelming amount of visual information. Covert attention allows us to select visual information at a cued location, without eye movements, and to grant such information priority in processing. Covert attention can be voluntarily allocated, to a given location according to goals, or involuntarily allocated, in a reflexive manner, to a cue that appears suddenly in the visual field. Covert attention improves discriminability in a wide variety of visual tasks. An important unresolved issue is whether covert attention can also speed the rate at which information is processed. To address this issue, it is necessary to obtain conjoint measures of the effects of covert attention on discriminability and rate of information processing. We used the response-signal speed-accuracy tradeoff (SAT) procedure to derive measures of how cueing a target location affects speed and accuracy in a visual search task. Here, we show that covert attention not only improves discriminability but also accelerates the rate of information processing.
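In the response-signal SAT procedure, each time-accuracy curve is typically summarised by a shifted exponential approach to an asymptote, where the asymptote reflects discriminability and the rate and intercept reflect processing speed. The sketch below shows that standard descriptive function with hypothetical parameter values; it is not the authors' fitting code.

```python
import numpy as np

def sat_curve(t, lam, beta, delta):
    """Standard shifted-exponential SAT function:
    d'(t) = lam * (1 - exp(-beta * (t - delta))) for t > delta, else 0.
    lam   -- asymptotic discriminability (d'), e.g. raised by covert attention
    beta  -- rate of rise: how fast information accrues
    delta -- intercept: time at which accuracy departs from chance
    """
    t = np.asarray(t, dtype=float)
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

# Hypothetical cued vs. uncued parameters: cueing raises the asymptote and the rate
times = np.linspace(0.0, 2.0, 9)                         # processing time in seconds
print(sat_curve(times, lam=2.5, beta=4.0, delta=0.3))    # cued location
print(sat_curve(times, lam=2.0, beta=3.0, delta=0.3))    # uncued location
```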

Relevance: 100.00%

Abstract:

The retina is a very complex neural structure, which performs spatial, temporal, and chromatic processing on visual information and converts it into a compact 'digital' format composed of neural impulses. This paper presents a new compiler-based framework able to describe, simulate and validate custom retina models. The framework is compatible with the most common neural recording and analysis tools, taking advantage of interoperability with these kinds of applications. Furthermore, it is possible to compile the code to generate accelerated versions of the visual processing models compatible with COTS microprocessors, FPGAs, or GPUs. The whole system represents ongoing work to design and develop a functional visual neuroprosthesis. Several case studies are described to assess the effectiveness and usefulness of the framework.
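A typical spatial building block of such retina models is center-surround filtering, often modelled as a difference of Gaussians whose output is thresholded into discrete impulse-like events. The sketch below is a generic illustration of that idea with hypothetical parameters, not code produced by the framework described here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retina_dog_spikes(frame: np.ndarray, sigma_center: float = 1.0,
                      sigma_surround: float = 3.0, threshold: float = 0.1) -> np.ndarray:
    """Center-surround (difference-of-Gaussians) filtering of a grayscale frame,
    followed by a simple threshold that marks 'spiking' pixels."""
    frame = frame.astype(float) / 255.0
    center = gaussian_filter(frame, sigma_center)
    surround = gaussian_filter(frame, sigma_surround)
    response = center - surround            # ON-center response
    return response > threshold             # boolean spike map for this frame

# Hypothetical 64x64 test frame with a bright square on a dark background
frame = np.zeros((64, 64), dtype=np.uint8)
frame[24:40, 24:40] = 255
spikes = retina_dog_spikes(frame)
print(int(spikes.sum()), "pixels above threshold")
```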

Relevance: 100.00%

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.