129 results for Auditory-visual Interaction
at University of Queensland eSpace - Australia
Abstract:
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as [da] or [tha], was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in "then"). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control-group infants, [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. (C) 2004 Wiley Periodicals, Inc.
Abstract:
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training. (C) 2004 Elsevier Ltd. All rights reserved.
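The abstract above does not name the integration model against which results were compared; in the literature surrounding the Baldi talking head, the standard candidate is Massaro's fuzzy logical model of perception (FLMP). The sketch below is an illustration under that assumption, for a two-alternative identification task, treating unimodal accuracies directly as the model's support values (the actual study would have fitted model parameters to data; the function name is ours):

```python
def flmp_bimodal(p_auditory: float, p_visual: float) -> float:
    """Predicted probability of a correct bimodal identification under an
    FLMP-style integration rule: unimodal supports for each alternative
    are multiplied, then renormalized over the two alternatives."""
    support_correct = p_auditory * p_visual
    support_incorrect = (1 - p_auditory) * (1 - p_visual)
    return support_correct / (support_correct + support_incorrect)

# Integration predicts a super-additive benefit: two moderately reliable
# unimodal cues combine into a more reliable bimodal percept.
print(round(flmp_bimodal(0.7, 0.7), 3))  # 0.845
```

This captures the logic of the group comparison: if children with ASD integrate normally, their bimodal scores should track this rule once their (lower) unimodal accuracies are fed in, which is the pattern the abstract reports.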
Abstract:
PURPOSE. The driving environment is becoming increasingly complex, including both visual and auditory distractions within the in-vehicle and external driving environments. This study was designed to investigate the effect of visual and auditory distractions on a performance measure that has been shown to be related to driving safety, the useful field of view. METHODS. A laboratory study recorded the useful field of view in 28 young visually normal adults (mean age 22.6 +/- 2.2 years). The useful field of view was measured in the presence and absence of visual distracters (of the same angular subtense as the target) and with three levels of auditory distraction (none, listening only, listening and responding). RESULTS. Central errors increased significantly (P < 0.05) in the presence of auditory but not visual distracters, while peripheral errors increased in the presence of both visual and auditory distracters. Peripheral errors increased with eccentricity and were greatest in the inferior region in the presence of distracters. CONCLUSIONS. Visual and auditory distracters reduce the extent of the useful field of view, and these effects are exacerbated in inferior and peripheral locations. This result has significant ramifications for road safety in an increasingly complex in-vehicle and driving environment.
Abstract:
A dissociation between two putative measures of resource allocation, skin conductance responding and secondary task reaction time (RT), has been observed during auditory discrimination tasks. Four experiments investigated the time course of the dissociation effect with a visual discrimination task. Participants were presented with circles and ellipses and instructed to count the number of longer-than-usual presentations of one shape (task-relevant) and to ignore presentations of the other shape (task-irrelevant). Concurrent with this task, participants made a speeded motor response to an auditory probe. Experiment 1 showed that skin conductance responses were larger during task-relevant stimuli than during task-irrelevant stimuli, whereas RT to probes presented at 150 ms following shape onset was slower during task-irrelevant stimuli. Experiments 2 to 4 found slower RT during task-irrelevant stimuli for probes presented from 300 ms before shape onset until 150 ms following shape onset. For probes presented 3,000 and 4,000 ms following shape onset, probe RT was slower during task-relevant stimuli. The similarities between the observed time course and the so-called psychological refractory period (PRP) effect are discussed.
Abstract:
Spontaneous and tone-evoked changes in light reflectance were recorded from primary auditory cortex (A1) of anesthetized cats (barbiturate induction, ketamine maintenance). Spontaneous 0.1-Hz oscillations of reflectance of 540- and 690-nm light were recorded in quiet. Stimulation with tone pips evoked localized reflectance decreases at 540 nm in 3/10 cats. The distribution of patches activated by tones of different frequencies reflected the known tonotopic organization of auditory cortex. Stimulus-evoked reflectance changes at 690 nm were observed in 9/10 cats but lacked stimulus-dependent topography. In two experiments, stimulus-evoked optical signals at 540 nm were compared with multiunit responses to the same stimuli recorded at multiple sites. A significant correlation (P < 0.05) between magnitude of reflectance decrease and multiunit response strength was evident in only one of five stimulus conditions in each experiment. There was no significant correlation when data were pooled across all stimulus conditions in either experiment. In one experiment, the spatial distribution of activated patches, evident in records of spontaneous activity at 540 nm, was similar to that of patches activated by tonal stimuli. These results suggest that local cerebral blood volume changes reflect the gross tonotopic organization of A1 but are not restricted to the sites of spiking neurons.
Abstract:
We investigated how the relative direction of limb movements in external space (iso- and non-isodirectionality), muscular constraints (the relative timing of homologous muscle activation) and the egocentric frame of reference (moving simultaneously toward/away from the longitudinal axis of the body) contribute to the stability of coordinated movements. In the first experiment, we attempted to determine the respective stability of isodirectional and non-isodirectional movements in between-persons coordination. In a second experiment, we determined the effect of the relative direction in external space, and of muscular constraints, on pattern stability during a within-person bimanual coordination task. In the third experiment we dissociated the effects on pattern stability of the muscular constraints, relative direction and egocentric frame of reference. The results showed that (1) simultaneous activation of homologous muscles resulted in more stable performance than simultaneous activation of non-homologous muscles during within-person coordination, and that (2) isodirectional movements were more stable than non-isodirectional movements during between-persons coordination, confirming the role of the relative direction of the moving limbs in the stability of bimanual coordination. Moreover, the egocentric constraint was to some extent found distinguishable from the effect of the relative direction of the moving limbs in external space, and from the effect of the relative timing of muscle activation. In summary, the present study showed that the relative direction of the moving limbs in external space and muscular constraints may interact either to stabilize or destabilize coordination patterns. (C) 2003 Published by Elsevier B.V.
Abstract:
The placement of monocular laser lesions in the adult cat retina produces a lesion projection zone (LPZ) in primary visual cortex (V1) in which the majority of neurons have a normally located receptive field (RF) for stimulation of the intact eye and an ectopically located RF (displaced to intact retina at the edge of the lesion) for stimulation of the lesioned eye. Animals that had such lesions for 14-85 d were studied under halothane and nitrous oxide anesthesia with conventional neurophysiological recording techniques and stimulation with moving light bars. Previous work suggested that a candidate source of input that could account for the development of the ectopic RFs was long-range horizontal connections within V1. The critical contribution of such input was examined by placing a pipette containing the neurotoxin kainic acid at a site in the normal V1 visual representation that overlapped with the ectopic RF recorded at a site within the LPZ. Continuation of well-defined responses to stimulation of the intact eye served as a control against direct effects of the kainic acid at the LPZ recording site. In six of seven cases examined, kainic acid deactivation of neurons at the injection site blocked responsiveness to lesioned-eye stimulation at the ectopic RF for the LPZ recording site. We therefore conclude that long-range horizontal projections contribute to the dominant input underlying the capacity for retinal lesion-induced plasticity in V1.
Abstract:
There is still a great deal of opportunity for research on contextual interactive immersion in virtual heritage environments. The general failure of virtual environment technology to create engaging and educational experiences may be attributable not just to deficiencies in technology or in visual fidelity, but also to a lack of contextual and performative-based interaction, such as that found in games. However, there is little written so far on exactly how game-style interaction can help improve virtual learning environments.
Abstract:
In mapping the evolutionary process of online news and the socio-cultural factors determining this development, this paper has a dual purpose. First, in reworking the definition of “online communication”, it argues that despite its seemingly sudden emergence in the 1990s, the history of online news started in the early days of the telegraph and continued through the development of the telephone and the fax machine before becoming computer-based in the 1980s and Web-based in the 1990s. Second, merging macro-perspectives on the dynamics of media evolution by DeFleur and Ball-Rokeach (1989) and Winston (1998), the paper consolidates a critical point for thinking about new media development: that something technically feasible does not always mean that it will be socially accepted and/or demanded. From a producer-centric perspective, the birth and development of pre-Web online news forms have been more or less generated by the traditional media’s sometimes excessive hype about the power of new technologies. However, placing such an emphasis on technological potentials at the expense of their social conditions not only can be misleading but also can be detrimental to the development of new media, including the potential of today’s online news.
Abstract:
Some motor tasks can be completed, quite literally, with our eyes shut. Most people can touch their nose without looking or reach for an object after only a brief glance at its location. This distinction leads to one of the defining questions of movement control: is information gleaned prior to starting the movement sufficient to complete the task (open loop), or is feedback about the progress of the movement required (closed loop)? One task that has commanded considerable interest in the literature over the years is that of steering a vehicle, in particular lane-correction and lane-changing tasks. Recent work has suggested that this type of task can proceed in a fundamentally open loop manner [1 and 2], with feedback mainly serving to correct minor, accumulating errors. This paper reevaluates the conclusions of these studies by conducting a new set of experiments in a driving simulator. We demonstrate that, in fact, drivers rely on regular visual feedback, even during the well-practiced steering task of lane changing. Without feedback, drivers fail to initiate the return phase of the maneuver, resulting in systematic errors in final heading. The results provide new insight into the control of vehicle heading, suggesting that drivers employ a simple policy of “turn and see,” with only limited understanding of the relationship between steering angle and vehicle heading.
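The open- versus closed-loop distinction drawn above can be made concrete with a toy lane-change simulation. This is a minimal sketch under our own assumptions, not the simulator or control law used in the study: the vehicle model is a trivial integrator, the open-loop controller replays a fixed, slightly mis-calibrated turn-then-counterturn plan, and the closed-loop "turn and see" controller corrects steering from feedback at every step:

```python
def simulate(steer_fn, steps=100, dt=0.1):
    """Integrate a trivial vehicle model: heading changes with the
    steering command, lateral position changes with heading."""
    heading, lateral = 0.0, 0.0
    for t in range(steps):
        steer = steer_fn(t, heading, lateral)
        heading += steer * dt
        lateral += heading * dt
    return heading, lateral

TARGET = 1.0  # desired lateral offset (one lane width, arbitrary units)

def open_loop(t, heading, lateral):
    # Pre-planned turn and return phases, executed blind; the return
    # phase is slightly under-steered, and nothing corrects for it.
    return 0.2 if t < 50 else -0.18

def closed_loop(t, heading, lateral):
    # "Turn and see": steer to reduce the remaining lateral error,
    # damped by the current heading.
    return 0.5 * (TARGET - lateral) - 1.0 * heading

h_open, _ = simulate(open_loop)
h_closed, _ = simulate(closed_loop)
print(f"open-loop final heading error:   {h_open:.3f}")
print(f"closed-loop final heading error: {h_closed:.3f}")
```

With feedback, the controller drives the final heading back toward zero regardless of plan error; without it, the imperfect return phase leaves a residual heading error, mirroring the systematic final-heading errors the study reports when visual feedback is withheld.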