9 results for Visual support

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

30.00%

Publisher:

Abstract:

Prompted reports of recall of spontaneous, conscious experiences were collected in a no-input, no-task, no-response paradigm (30 random prompts to each of 13 healthy volunteers). The mentation reports were classified into visual imagery and abstract thought. Spontaneous 19-channel brain electric activity (EEG) was continuously recorded, viewed as a series of momentary spatial distributions (maps) of the brain electric field, and segmented into microstates, i.e. time segments characterized by quasi-stable potential distribution maps with varying durations in the sub-second range. Microstate segmentation used a data-driven strategy. Different microstates, i.e. different brain electric landscapes, must have been generated by the activity of different neural assemblies and are therefore hypothesized to constitute different functions. The two types of reported experiences were associated with significantly different microstates (mean duration 121 ms) immediately preceding the prompts; across subjects, these microstates showed for abstract thought (compared to visual imagery) a leftward shift of the electric gravity center and a clockwise rotation of the field axis. In contrast, the microstates 2 s before the prompt did not differ between the two types of experiences. The results support the hypothesis that different microstates of the brain, as recognized in its electric field, implement different conscious, reportable mind states, i.e. different classes (types) of thoughts (mentations); thus, the microstates might be candidates for the 'atoms of thought'.
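
The abstract does not spell out the segmentation algorithm, but data-driven microstate analysis is commonly implemented as polarity-invariant k-means clustering of the momentary field maps sampled at peaks of the Global Field Power (GFP). The sketch below illustrates that general approach; the array shapes, the number of states, and the function names are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def gfp(eeg):
    """Global Field Power: spatial standard deviation of each momentary map.
    eeg: array of shape (n_samples, n_channels), assumed average-referenced."""
    return eeg.std(axis=1)

def segment_microstates(eeg, n_states=4, n_iter=100, seed=0):
    """Polarity-invariant (modified) k-means on maps at GFP peaks.

    Returns template maps and one label per sample; runs of identical
    labels form the quasi-stable microstate segments."""
    rng = np.random.default_rng(seed)
    g = gfp(eeg)
    # local GFP maxima: moments of highest signal-to-noise topography
    peaks = np.flatnonzero((g[1:-1] > g[:-2]) & (g[1:-1] > g[2:])) + 1
    maps = eeg[peaks]
    maps = maps / np.linalg.norm(maps, axis=1, keepdims=True)
    # initialize templates from randomly chosen peak maps
    templates = maps[rng.choice(len(maps), n_states, replace=False)]
    for _ in range(n_iter):
        # assign each map to the template with the highest |correlation|
        # (the absolute value makes the clustering polarity-invariant)
        labels = np.abs(maps @ templates.T).argmax(axis=1)
        for k in range(n_states):
            cluster = maps[labels == k]
            if len(cluster):
                # first principal component as the polarity-free mean map
                _, _, vt = np.linalg.svd(cluster, full_matrices=False)
                templates[k] = vt[0]
    # back-fit templates to every sample to obtain the full segmentation
    all_maps = eeg / (np.linalg.norm(eeg, axis=1, keepdims=True) + 1e-12)
    sample_labels = np.abs(all_maps @ templates.T).argmax(axis=1)
    return templates, sample_labels
```

Segment durations (such as the 121 ms mean reported above) would then follow from the lengths of the constant-label runs divided by the sampling rate.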

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Many patients with Posttraumatic Stress Disorder (PTSD) feel overwhelmed in situations with high levels of sensory input, as in crowded situations with complex sensory characteristics. These difficulties might be related to subtle sensory processing deficits similar to those that have been found for sounds in electrophysiological studies. METHOD: Visual processing was investigated with functional magnetic resonance imaging in trauma-exposed participants with (N = 18) and without PTSD (N = 21) employing a picture-viewing task. RESULTS: Activity observed in response to visual scenes was lower in PTSD participants 1) in the ventral stream of the visual system, including striate and extrastriate, inferior temporal, and entorhinal cortices, and 2) in dorsal and ventral attention systems (P < 0.05, FWE-corrected). These effects could not be explained by the emotional salience of the pictures. CONCLUSIONS: Visual processing was substantially altered in PTSD in the ventral visual stream, a component of the visual system thought to be responsible for object property processing. Together with previous reports of subtle auditory deficits in PTSD, these findings provide strong support for potentially important sensory processing deficits, whose origins may be related to dysfunctional attention processes.
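
The reported group differences are voxelwise comparisons with family-wise error (FWE) correction. Standard fMRI packages usually control FWE via random field theory; the sketch below instead uses a plain two-sample t-test with a Bonferroni bound, the simplest valid FWE control, purely to illustrate the idea. The shapes and names are assumptions, not the study's pipeline.

```python
import numpy as np
from scipy import stats

def voxelwise_group_ttest(group_a, group_b, alpha=0.05):
    """Two-sample t-test at every voxel with a Bonferroni FWE bound.

    group_a, group_b: arrays of shape (n_subjects, n_voxels) holding one
    contrast value (e.g. scenes > baseline) per subject and voxel.
    Returns the t-map and a boolean mask of voxels surviving correction.
    """
    t, p = stats.ttest_ind(group_a, group_b, axis=0)
    survives = p < alpha / group_a.shape[1]  # Bonferroni over all voxels
    return t, survives

# illustrative call with invented shapes (21 controls, 18 PTSD, 50k voxels):
# controls = np.random.randn(21, 50000); ptsd = np.random.randn(18, 50000)
# t_map, mask = voxelwise_group_ttest(controls, ptsd)
```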

Relevance:

30.00%

Publisher:

Abstract:

Motor-performance-enhancing effects of long final fixations before movement initiation, a phenomenon called Quiet Eye (QE), have repeatedly been demonstrated. Drawing on the information-processing framework, it is assumed that the QE supports information processing, as revealed by the close link between QE duration and task demands concerning, in particular, response selection and movement parameterisation. However, the question remains whether the suggested mechanism also holds for processes related to stimulus identification. In two experiments, performance in a targeting task was therefore tested as a function of experimentally manipulated visual processing demands as well as experimentally manipulated QE durations. The results support the suggested link, because a performance-enhancing QE effect was found only under increased visual processing demands: whereas QE duration did not affect performance as long as positional information was preserved (Experiment 1), in the full vs. no target visibility comparison, QE efficiency turned out to depend on information-processing time once the interval fell below a certain threshold (Experiment 2). Thus, the results contradict alternative explanations of QE effects (e.g., posture-based accounts) and support the assumption that the crucial mechanism behind the QE phenomenon is rooted in the cognitive domain.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. As nonverbal cues, they also prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, whose verbal comprehension is restricted, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. We discuss whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
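
Cumulative and mean fixation duration per factor cell are simple aggregates over the eye-tracker's fixation events. A minimal sketch, assuming fixations arrive as (gesture present, gaze direction, ROI, duration) records; that tuple layout is an assumption, not the tracker's actual output format.

```python
from collections import defaultdict

def fixation_stats(fixations):
    """Cumulative and mean fixation duration per condition cell.

    fixations: iterable of (gesture_present, gaze_direction, roi,
    duration_ms) tuples, e.g. (True, "speaker", "face", 312.0).
    Returns {(gesture, gaze, roi): (cumulative_ms, mean_ms)}.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for gesture, gaze, roi, dur in fixations:
        key = (gesture, gaze, roi)
        totals[key] += dur
        counts[key] += 1
    return {k: (totals[k], totals[k] / counts[k]) for k in totals}
```

The per-cell values computed this way would then feed the gesture × ROI and gaze direction × ROI × group ANOVAs reported above.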

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND It has been suggested that sleep apnea syndrome may play a role in normal-tension glaucoma by contributing to optic nerve damage. The purpose of this study was to evaluate whether optic nerve and visual field parameters in individuals with sleep apnea syndrome differ from those in controls. PATIENTS AND METHODS From the records of the sleep laboratory at the University Hospital in Bern, Switzerland, we recruited consecutive patients with severe sleep apnea syndrome proven by polysomnography (apnea-hypopnea index >20) as well as controls without sleep apnea (apnea-hypopnea index <10). Participants had to be unknown to the ophthalmology department and to have no recent eye examination in their medical history. All participants underwent a comprehensive eye examination, scanning laser polarimetry (GDx VCC, Carl Zeiss Meditec, Dublin, California), scanning laser ophthalmoscopy (Heidelberg Retina Tomograph II, HRT II), and automated perimetry (Octopus 101, program G2, Haag-Streit Diagnostics, Koeniz, Switzerland). Mean values of the parameters of the two groups were compared by t-test. RESULTS The sleep apnea group comprised 69 eyes of 35 patients (age 52.7 ± 9.7 years, apnea-hypopnea index 46.1 ± 24.8); the control group comprised 38 eyes of 19 patients (age 45.8 ± 11.2 years, apnea-hypopnea index 4.8 ± 1.9). A difference was found in mean intraocular pressure, although the ranges fully overlapped (sleep apnea group: 15.2 ± 3.1 mmHg, range 8-22 mmHg; controls: 13.6 ± 2.3 mmHg, range 9-18 mmHg; p < 0.01). None of the extended visual field, optic nerve head (HRT II), and retinal nerve fiber layer (GDx VCC) parameters showed a significant difference between the groups. CONCLUSION Visual field, optic nerve head, and retinal nerve fiber layer parameters in patients with sleep apnea did not differ from those in the control group. Our results do not support a pathogenic relationship between sleep apnea syndrome and glaucoma.
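
Because the abstract reports means, standard deviations, and group sizes for intraocular pressure, the stated p < 0.01 can be checked with a two-sample t-test computed from summary statistics (SciPy ships such a helper). The check below assumes a pooled-variance t-test over eyes; the abstract does not say whether the authors adjusted for the two eyes contributed per patient.

```python
from scipy.stats import ttest_ind_from_stats

# Reported intraocular pressure: sleep apnea 15.2 +/- 3.1 mmHg (69 eyes),
# controls 13.6 +/- 2.3 mmHg (38 eyes). A pooled-variance two-sample
# t-test from these summary statistics reproduces a p-value below 0.01.
t, p = ttest_ind_from_stats(mean1=15.2, std1=3.1, nobs1=69,
                            mean2=13.6, std2=2.3, nobs2=38)
print(f"t = {t:.2f}, p = {p:.4f}")  # roughly t = 2.79, p = 0.006
```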

Relevance:

30.00%

Publisher:

Abstract:

According to the direct matching hypothesis, perceived movements automatically activate existing motor components through matching of the perceived gesture and its execution. The aim of the present study was to test the direct matching hypothesis by assessing whether visual exploration behaviour correlates with deficits in gestural imitation in left hemisphere damaged (LHD) patients. Eighteen LHD patients and twenty healthy control subjects took part in the study. Gesture imitation performance was measured by the test for upper limb apraxia (TULIA). Visual exploration behaviour was measured by an infrared eye-tracking system. Short videos including forty gestures (20 meaningless and 20 communicative gestures) were presented. Cumulative fixation duration was measured in different regions of interest (ROIs), namely the face, the gesturing hand, the body, and the surrounding environment. Compared to healthy subjects, patients fixated the ROIs comprising the face and the gesturing hand significantly less during the exploration of emblematic and tool-related gestures. Moreover, visual exploration of tool-related gestures significantly correlated with tool-related imitation as measured by TULIA in LHD patients. Patients and controls did not differ in the visual exploration of meaningless gestures, and no significant relationships were found between visual exploration behaviour and the imitation of emblematic and meaningless gestures in TULIA. The present study thus suggests that altered visual exploration may lead to disturbed imitation of tool-related gestures, but not of emblematic and meaningless gestures. Consequently, our findings partially support the direct matching hypothesis.
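
The abstract reports a significant correlation between visual exploration of tool-related gestures and tool-related imitation scores without naming the coefficient used. A minimal sketch assuming a Pearson correlation over per-patient values; both the coefficient choice and the variable names are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def exploration_imitation_correlation(fixation_dur_ms, tulia_scores):
    """Correlate cumulative fixation duration on an ROI (e.g. the gesturing
    hand during tool-related gestures) with TULIA imitation scores.
    Both arguments hold one value per patient. Returns (r, p)."""
    return pearsonr(np.asarray(fixation_dur_ms), np.asarray(tulia_scores))
```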

Relevance:

30.00%

Publisher:

Abstract:

We tested the predictions of Attentional Control Theory (ACT) by examining how anxiety affects visual search strategies, performance efficiency, and performance effectiveness in a dynamic, temporally constrained anticipation task. Higher- and lower-skilled players viewed soccer situations under two task constraints (near vs. far situations) and were tested under high-anxiety (HA) and low-anxiety (LA) conditions. Response accuracy (effectiveness) as well as response time, perceived mental effort, and eye movements (all efficiency measures) were recorded. A significant increase in anxiety was evidenced by higher state anxiety ratings on the MRF-L scale. Increased anxiety led to decreased performance efficiency: response times and mental effort increased for both skill groups, whereas response accuracy did not differ. Anxiety also influenced search strategies, with higher-skilled players, relative to lower-skilled players, showing a decrease in the number of fixation locations for far situations under the HA compared with the LA condition. The findings provide support for ACT, with anxiety impairing processing efficiency and, potentially, top-down attentional control across different task constraints.

Relevance:

30.00%

Publisher:

Abstract:

A common finding in time psychophysics is that temporal acuity is much better for auditory than for visual stimuli. The present study aimed to examine modality-specific differences in duration discrimination within the conceptual framework of the Distinct Timing Hypothesis. This theoretical account proposes that durations in the lower millisecond range are processed automatically, while longer durations are processed by a cognitive mechanism. A sample of 46 participants performed auditory and visual duration discrimination tasks with extremely brief (50-ms standard duration) and longer (1000-ms standard duration) intervals. Better discrimination performance for auditory than for visual intervals was found for both extremely brief and longer intervals. However, when performance on duration discrimination of longer intervals in the 1-s range was controlled for modality-specific input from the sensory-automatic timing mechanism, the visual-auditory difference disappeared completely, as indicated by virtually identical Weber fractions for both sensory modalities. These findings support the idea of a sensory-automatic mechanism underlying the observed visual-auditory differences in duration discrimination for extremely brief intervals in the millisecond range as well as for longer intervals in the 1-s range. Our data are consistent with the notion of a gradual transition from a purely modality-specific, sensory-automatic timing mechanism to a more cognitive, amodal one. Within this transition zone, both mechanisms appear to operate simultaneously, but the influence of the sensory-automatic mechanism is expected to decrease continuously with increasing interval duration.
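
The Weber fraction referred to here is the difference threshold divided by the standard duration, which normalizes discrimination performance so that the 50-ms and 1000-ms conditions can be compared on one scale. A worked sketch, with purely illustrative threshold values rather than the study's data:

```python
def weber_fraction(difference_threshold_ms, standard_ms):
    """Weber fraction: the just-noticeable difference expressed as a
    proportion of the standard duration, making discrimination
    performance comparable across standard durations."""
    return difference_threshold_ms / standard_ms

# e.g. a 10 ms threshold on the 50 ms standard (WF = 0.20) reflects far
# poorer relative acuity than a 100 ms threshold on the 1000 ms standard
# (WF = 0.10); the values here are illustrative only.
print(weber_fraction(10.0, 50.0), weber_fraction(100.0, 1000.0))
```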

Relevance:

30.00%

Publisher:

Abstract:

Background: Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. As nonverbal cues, they also prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, whose verbal comprehension is restricted, adapt their visual exploration strategies. Methods: Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Results: Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Conclusion: Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. We discuss whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.

Keywords: gestures, visual exploration, dialogue, aphasia, apraxia, eye movements