67 results for experimental visual perception


Relevance: 90.00%

Abstract:

Visual perception is not identical in the upper and lower visual hemifields. The mechanisms behind this difference may lie at the retinal, cortical, or higher attentional level. In this study, a new visual test battery is presented that involves real-time comparisons of complex visual stimuli, such as the shape of objects and the speed of moving dot patterns, in the upper and lower visual hemifields. To our knowledge, this is the first study to implement such a test battery in an immersive hemispherical environment, which allows visual stimuli to be presented at precise locations in the visual field. Ten healthy volunteers were tested in this pilot study. The results showed higher accuracy in image matching when the test was performed in the lower visual hemifield.
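
A minimal sketch of how per-hemifield matching accuracy from such a battery could be compared across participants; the accuracy values and the choice of a paired Wilcoxon test are illustrative assumptions, not the study's analysis.

```python
# Hypothetical sketch: comparing image-matching accuracy between the upper and
# lower visual hemifields across participants (all values are illustrative).
import numpy as np
from scipy.stats import wilcoxon

# proportion of correct matches per participant (10 volunteers), one value per hemifield
upper = np.array([0.70, 0.65, 0.72, 0.68, 0.75, 0.66, 0.71, 0.69, 0.73, 0.67])
lower = np.array([0.78, 0.74, 0.80, 0.73, 0.82, 0.75, 0.79, 0.77, 0.81, 0.74])

# paired, non-parametric comparison (small sample, bounded accuracy values)
stat, p = wilcoxon(lower, upper)
print(f"median upper = {np.median(upper):.2f}, median lower = {np.median(lower):.2f}, p = {p:.3f}")
```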

Relevance: 90.00%

Abstract:

BACKGROUND Patients with downbeat nystagmus syndrome suffer from oscillopsia, which leads to unstable visual perception and therefore impaired visual acuity. The aim of this study was to use real-time, computer-based visual feedback to compensate for the destabilizing slow-phase eye movements. METHODS Patients sat in front of a computer screen with the head stabilized on a chin rest. Eye movements were recorded with an eye-tracking system (EyeSeeCam®). We tested visual acuity with a fixed Landolt C (static condition) and during a real-time feedback-driven condition (dynamic), both in straight-ahead gaze and in 20° sideward gaze. In the dynamic condition, the Landolt C moved according to the slow-phase eye velocity of the downbeat nystagmus. The Shapiro-Wilk test was used to test for normal distribution and one-way ANOVA for comparisons. RESULTS Ten patients with downbeat nystagmus were included in the study. Median age was 76 years, and the median duration of symptoms was 6.3 years (SD ± 3.1 years). The mean slow-phase velocity was moderate during straight-ahead gaze (1.44°/s, SD ± 1.18°/s) and increased significantly in sideward gaze (mean 3.36°/s to the left; 3.58°/s to the right). In straight-ahead gaze, we found no difference between the static and the feedback-driven condition. In sideward gaze, visual acuity improved in five out of ten subjects during the feedback-driven condition (p = 0.043). CONCLUSIONS This study provides proof of concept that non-invasive, real-time, computer-based visual feedback can compensate for the slow-phase velocity (SPV) in downbeat nystagmus (DBN). Real-time visual feedback may therefore be a promising aid for patients suffering from oscillopsia and impaired reading of text on screens. Recent technological advances in virtual reality displays might soon make this approach feasible in fully mobile settings.
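
A schematic sketch of the feedback principle described above, not the authors' implementation: during slow phases the optotype is shifted by the estimated eye velocity so that its retinal position stays roughly stable. The velocity threshold, frame rate, and function names are assumptions.

```python
# Schematic sketch of real-time visual feedback for downbeat nystagmus
# (illustrative only; threshold, frame rate, and interface are assumptions).

FRAME_RATE_HZ = 60.0
QUICK_PHASE_THRESHOLD_DEG_S = 30.0  # velocities above this are treated as quick phases

def update_optotype_position(y_deg: float, eye_velocity_deg_s: float) -> float:
    """Return the new vertical optotype position after one display frame."""
    if abs(eye_velocity_deg_s) < QUICK_PHASE_THRESHOLD_DEG_S:
        # slow phase: move the Landolt C with the eye to reduce retinal slip
        y_deg += eye_velocity_deg_s / FRAME_RATE_HZ
    # quick phase: leave the optotype in place
    return y_deg

# example frame update with a hypothetical slow-phase velocity of 1.5 deg/s
print(update_optotype_position(y_deg=0.0, eye_velocity_deg_s=1.5))
```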

Relevance: 80.00%

Abstract:

Visual imagery, similar to visual perception, activates feature-specific and category-specific visual areas. This is frequently observed in experiments where the instruction is to imagine stimuli that have been shown immediately before the imagery task. Hence, feature-specific activation could be related to the short-term memory retrieval of previously presented sensory information. Here, we investigated mental imagery of stimuli that subjects had not seen before, eliminating the effects of short-term memory. We recorded brain activation using fMRI while subjects performed a behaviourally controlled guided imagery task in predefined retinotopic coordinates, in order to optimize sensitivity in early visual areas. Whole-brain analyses revealed activation in a parieto-frontal network and lateral occipital cortex. Region-of-interest (ROI) analyses showed activation in left hMT/V5+. Granger causality mapping taking left hMT/V5+ as the source region revealed an imagery-specific directed influence from the left inferior parietal lobule (IPL). Interestingly, we observed a negative BOLD response in V1-V3 during imagery, modulated by the retinotopic location of the imagined motion trace. Our results indicate that rule-based motion imagery can activate higher-order visual areas involved in motion perception, with a role for top-down directed influences originating in the IPL. Lower-order visual areas (V1, V2, and V3) were down-regulated during this type of imagery, possibly reflecting inhibition that prevents visual input from interfering with the construction of the imagery. This suggests that the activation of early visual areas observed in previous studies might be related to short- or long-term memory retrieval of specific sensory experiences.
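
A toy illustration of the directed-influence statistic underlying such an analysis, run on two synthetic ROI time courses with statsmodels; the study itself used whole-brain Granger causality mapping on fMRI data, so this sketch only shows the idea, and the variable names are assumptions.

```python
# Toy Granger-causality test between two ROI time courses (synthetic data).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 200
ipl = rng.standard_normal(n)                           # putative source (e.g., left IPL)
hmt = np.roll(ipl, 2) + 0.5 * rng.standard_normal(n)   # target lags the source by 2 samples

# column order: [target, candidate cause]; tests whether 'ipl' helps predict 'hmt'
data = np.column_stack([hmt, ipl])
res = grangercausalitytests(data, maxlag=3)
f_stat, p_value = res[2][0]["ssr_ftest"][:2]
print(f"lag-2 F = {f_stat:.2f}, p = {p_value:.4f}")
```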

Relevance: 80.00%

Abstract:

The right and left visual hemifields are represented in different cerebral hemispheres and are bound together by connections through the corpus callosum. Much has been learned about the functions of these connections from split-brain patients [1-4], but little is known about their contribution to conscious visual perception in healthy humans. We used diffusion tensor imaging and functional magnetic resonance imaging to investigate which callosal connections contribute to the subjective experience of a visual motion stimulus that requires interhemispheric integration. The "motion quartet" is an ambiguous version of apparent motion that leads to the perception of either horizontal or vertical motion [5]. Interestingly, observers are more likely to perceive vertical than horizontal motion when the stimulus is presented centrally in the visual field [6]. This asymmetry has been attributed to the fact that, with central fixation, perception of horizontal motion requires integration across hemispheres, whereas perception of vertical motion requires only intrahemispheric processing [7]. We show that the microstructure of individually tracked callosal segments connecting motion-sensitive areas of the human MT/V5 complex (hMT/V5+; [8]) predicts observers' conscious perception. Neither the connections between primary visual cortices (V1) nor other surrounding callosal regions exhibit a similar relationship.
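
A heavily simplified sketch of relating a per-subject microstructure metric for the tracked hMT/V5+ callosal segment to perceptual bias; the fractional-anisotropy values, the proportion-horizontal measure, and the use of a simple Pearson correlation are all illustrative assumptions, not the study's actual analysis.

```python
# Illustrative sketch: callosal-segment microstructure vs. perceptual bias
# (hypothetical values; the study's metric and statistics may differ).
import numpy as np
from scipy.stats import pearsonr

# per-subject fractional anisotropy of the tracked hMT/V5+ callosal segment (hypothetical)
fa = np.array([0.52, 0.48, 0.55, 0.60, 0.45, 0.58, 0.50, 0.63, 0.47, 0.56])
# per-subject proportion of trials perceived as horizontal motion (hypothetical)
p_horizontal = np.array([0.40, 0.32, 0.45, 0.55, 0.28, 0.50, 0.38, 0.60, 0.30, 0.48])

r, p = pearsonr(fa, p_horizontal)
print(f"r = {r:.2f}, p = {p:.3f}")
```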

Relevance: 80.00%

Abstract:

Objectives: In fast ball sports like beach volleyball, decision-making skills are a determining factor for excellent performance. The current investigation aimed to identify factors that influence the decision-making process in top-level beach volleyball defense in order to find relevant aspects for further research. For this reason, focused interviews with top players in international beach volleyball were conducted and analyzed with respect to decision-making characteristics. Design: Nineteen world-tour beach volleyball defense players, including seven Olympic or world champions, were interviewed, focusing on decision-making factors, gaze behavior, and the interactions between the two. Methods: Verbal data were analyzed by inductive content analysis according to Mayring (2008). This approach allows categories to emerge from the interview material itself instead of forcing the data into preset classifications and theoretical concepts. Results: The data analysis showed that, in top-level beach volleyball defense, decision making depends on opponent specifics, external context, situational context, the opponent's movements, and intuition. Information on gaze patterns and visual cues revealed general tendencies indicating optimal gaze strategies that support excellent decision making. Furthermore, the analysis highlighted interactions between gaze behavior, visual information, and domain-specific knowledge. Conclusions: The present findings provide information on visual perception, domain-specific knowledge, and the interactions between the two that are relevant for decision making in top-level beach volleyball defense. The results can be used to inform sports practice and to further untangle the relevant mechanisms underlying decision making in complex game situations.

Relevance: 80.00%

Abstract:

Objective: To assess the neuropsychological outcome as a safety measure and quality control in patients with subthalamic nucleus (STN) stimulation for Parkinson's disease (PD). Background: Deep brain stimulation (DBS) is considered a relatively safe treatment for patients with movement disorders. However, neuropsychological alterations have been reported in patients with STN DBS for PD. Cognition and mood are important determinants of quality of life in PD patients and must be assessed for safety control. Methods: Seventeen consecutive patients (8 women) who underwent STN DBS for PD were assessed before and 4 months after surgery. Besides motor symptoms (UPDRS-III), mood (Beck Depression Inventory, Hamilton Depression Rating Scale) and neuropsychological functions, mainly executive functions, were assessed (Mini-Mental State Examination, semantic and phonemic verbal fluency, go/no-go test, Stroop test, Trail Making Test, tests of alertness and attention, digit span, word-list learning, praxis, Boston Naming Test, figure drawing, visual perception). Paired t-tests were used for comparisons before and after surgery. Results: Patients were 61.6 ± 7.8 years old at baseline assessment. All surgeries were performed without major adverse events. Motor symptoms "on" medication remained stable, whereas they improved in the "off" condition (p < 0.001). Mood was not depressed before surgery and remained unchanged at follow-up. All neuropsychological outcome measures remained stable at follow-up, with the exception of semantic verbal fluency and word-list learning. Semantic verbal fluency decreased by 21 ± 16% (p < 0.001), and there was a trend toward worse phonemic verbal fluency after surgery (p = 0.06). Recall of a list of 10 words was worse after surgery only for the third recall attempt (13%, p < 0.005). Conclusions: Verbal fluency decreased in our patients after STN DBS, as previously reported. The procedure was otherwise safe and did not lead to deterioration of mood.
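
A minimal sketch of the pre-/post-surgery comparison named above (paired t-test); the verbal-fluency scores below are hypothetical and only illustrate the analysis, not the study's data.

```python
# Paired t-test on hypothetical semantic verbal-fluency scores for 17 patients,
# measured before and 4 months after surgery (values are illustrative).
import numpy as np
from scipy.stats import ttest_rel

pre = np.array([24, 22, 27, 20, 25, 23, 26, 21, 28, 24, 22, 25, 23, 26, 24, 27, 21])
post = np.array([19, 18, 21, 16, 20, 18, 21, 17, 22, 19, 17, 20, 18, 21, 19, 22, 16])

t_stat, p_value = ttest_rel(pre, post)
print(f"t(16) = {t_stat:.2f}, p = {p_value:.4f}")
```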

Relevance: 50.00%

Abstract:

Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions in the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and audio-visual (AV), based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal, and prefrontal regions. In an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one data set and validated their functional roles in an independent data set. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent data sets to test hypotheses generated from a data-driven analysis.
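
A small sketch of the max-criterion (A < AV > V) stated above, applied to ROI-level response estimates; the beta values and function name are illustrative assumptions.

```python
# Max-criterion for multisensory integration: the audio-visual response must
# exceed both unisensory responses (illustrative values only).

def meets_max_criterion(beta_a: float, beta_v: float, beta_av: float) -> bool:
    """True if the AV response exceeds both the auditory and the visual response."""
    return beta_av > beta_a and beta_av > beta_v

# hypothetical mean betas for one candidate region
print(meets_max_criterion(beta_a=0.8, beta_v=1.1, beta_av=1.6))  # -> True
```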

Relevance: 40.00%

Abstract:

Autism has been associated with enhanced local processing on visual tasks. Originally, this was based on findings that individuals with autism exhibited peak performance on the block design test (BDT) from the Wechsler Intelligence Scales. The neurofunctional correlates of local bias on this test have not yet been established in autism, although there is evidence of alterations in the early visual cortex. Functional MRI was used to analyze hemodynamic responses in the striate and extrastriate visual cortex during BDT performance and a color-counting control task in subjects with autism compared to healthy controls. In autism, BDT processing was accompanied by low blood oxygenation level-dependent signal changes in the right ventral quadrant of V2. The findings indicate that, in autism, locally oriented processing of the BDT is associated with altered responses of angle- and grating-selective neurons that contribute to shape representation, figure-ground and gestalt organization. The findings favor a low-level explanation of BDT performance in autism.

Relevance: 40.00%

Abstract:

OBJECTIVE This study aimed to test the prediction from the Perception and Attention Deficit model of complex visual hallucinations (CVH) that impairments in visual attention and perception are key risk factors for complex hallucinations in eye disease and dementia. METHODS Two studies ran concurrently to investigate the relationship between CVH and impairments in perception (picture naming using the Graded Naming Test) and attention (Stroop task plus a novel imagery task). The studies were run in two populations, older patients with dementia (n = 28) and older people with eye disease (n = 50), with a shared control group (n = 37). The same methodology was used in both studies, and the North East Visual Hallucinations Inventory was used to identify CVH. RESULTS For older patients with dementia, a reliable relationship was found between impaired perceptual and attentional performance and CVH. No reliable relationship was found in the population of people with eye disease. CONCLUSIONS The results add to previous research showing that object perception and attentional deficits are associated with CVH in dementia, but the risk factors for CVH in eye disease remain inconsistent, suggesting that dynamic rather than static impairments of attentional processes may be key in this population.

Relevance: 40.00%

Abstract:

We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward on a motion platform and asked to indicate the direction of motion. A total of eleven participants completed 3,360 practice trials, distributed over 12 days (Experiment 1) or 6 days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising, since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation for the absence of perceptual learning in darkness.
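
A simple sketch of how direction-discrimination accuracy could be tracked across practice days to check for learning; the simulated responses, the flat accuracy level, and the linear-slope summary are illustrative assumptions.

```python
# Tracking discrimination accuracy across practice (synthetic responses).
import numpy as np

rng = np.random.default_rng(0)
days = 12
trials_per_day = 280  # 3,360 trials over 12 days

# responses coded 1 = correct, 0 = incorrect; here a flat ~72 % accuracy (no learning)
correct = rng.binomial(1, 0.72, size=(days, trials_per_day))

daily_accuracy = correct.mean(axis=1)
slope = np.polyfit(np.arange(days), daily_accuracy, 1)[0]
print(f"mean accuracy = {daily_accuracy.mean():.2f}, slope per day = {slope:+.4f}")
```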

Relevance: 40.00%

Abstract:

Motor-performance-enhancing effects of long final fixations before movement initiation, a phenomenon called the Quiet Eye (QE), have been demonstrated repeatedly. Within the information-processing framework, it is assumed that the QE supports information processing, as revealed by the close link between QE duration and task demands concerning, in particular, response selection and movement parameterisation. However, it remains an open question whether the suggested mechanism also holds for processes related to stimulus identification. Therefore, in a series of two experiments, performance in a targeting task was tested as a function of experimentally manipulated visual processing demands as well as experimentally manipulated QE durations. The results support the suggested link, because a performance-enhancing QE effect was found only under increased visual processing demands: whereas QE duration did not affect performance as long as positional information was preserved (Experiment 1), in the comparison of full versus no target visibility, QE efficiency depended on information-processing time as soon as this interval fell below a certain threshold (Experiment 2). The results thus speak against alternative, e.g., posture-based, explanations of QE effects and support the assumption that the crucial mechanism behind the QE phenomenon is rooted in the cognitive domain.

Relevance: 40.00%

Abstract:

BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. As nonverbal cues, they also prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards the speaker or the listener) and on the fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye tracker while subjects watched videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present, absent), gaze direction (towards the speaker or the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Furthermore, a significant gaze direction × ROI × group interaction revealed that aphasic patients showed a reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information causes aphasic patients to explore the speaker's face less.
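
A brief sketch of the two fixation measures named above, cumulative and mean fixation duration per ROI; the fixation records and field layout are illustrative assumptions.

```python
# Cumulative and mean fixation duration per region of interest
# (hypothetical fixation records; each entry is (ROI, duration in ms)).
from collections import defaultdict

fixations = [("face", 420), ("hands", 180), ("face", 350), ("body", 220), ("face", 510)]

per_roi = defaultdict(list)
for roi, duration in fixations:
    per_roi[roi].append(duration)

for roi, durations in per_roi.items():
    cumulative = sum(durations)
    mean = cumulative / len(durations)
    print(f"{roi}: cumulative = {cumulative} ms, mean = {mean:.0f} ms")
```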

Relevance: 40.00%

Abstract:

Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli, and that this order also affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the second is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström’s sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St–Co, Co–St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback and half did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St–Co than for Co–St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as described by the sensation-weighting model.
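
A schematic sketch of the core idea behind a sensation-weighting comparison: each of the two successive sensation magnitudes is weighted against a reference level, and unequal weights shift the point of subjective equality (a time-order error) and make discriminability depend on the standard's position. The weights, reference levels, and parameter names below are illustrative, not the study's fitted values or exact formulation.

```python
# Simplified sensation-weighting comparison of two successive intervals
# (illustrative parameters only).

def subjective_difference(psi1, psi2, s1, s2, rho1, rho2, k=1.0, b=0.0):
    """Weighted difference between the first and second sensation magnitude,
    each pulled toward its reference level by (1 - weight)."""
    return k * ((s1 * psi1 + (1 - s1) * rho1) - (s2 * psi2 + (1 - s2) * rho2)) + b

# physically equal intervals, but the first is under-weighted relative to the second:
# the result is nonzero, i.e., a time-order error
print(subjective_difference(psi1=1.0, psi2=1.0, s1=0.8, s2=1.0, rho1=0.9, rho2=0.9))
```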