147 results for Visual word recognition
Abstract:
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
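In the Baldi line of work, the integration model tested against non-integration alternatives is usually the fuzzy logical model of perception (FLMP); whether that is the exact model used here is not stated in the abstract, so the following is only a sketch of the general form. For response alternative $i$ with auditory support $a_i$ and visual support $v_i$ on a bimodal trial:

$$P(i \mid A, V) = \frac{a_i\,v_i}{\sum_j a_j\,v_j}$$

A non-integration model, by contrast, predicts the bimodal response from only one modality's support on any given trial, so the two classes of model diverge most when the auditory and visual sources conflict.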
Abstract:
Previously it has been shown that the branching pattern of pyramidal cells varies markedly between different cortical areas in simian primates. These differences are thought to influence the functional complexity of the cells. In particular, there is a progressive increase in the fractal dimension of pyramidal cells with anterior progression through cortical areas in the occipitotemporal (OT) visual stream, including the primary visual area (V1), the second visual area (V2), the dorsolateral area (DL, corresponding to the fourth visual area) and inferotemporal cortex (IT). However, there are as yet no data on the fractal dimension of these neurons in prosimian primates. Here we focused on the nocturnal prosimian galago (Otolemur garnetti). The fractal dimension (D) and aspect ratio (a measure of branching symmetry) were determined for 111 layer III pyramidal cells in V1, V2, DL and IT. We found, as in simian primates, that the fractal dimension of neurons increased with anterior progression from V1 through V2, DL, and IT. Two important conclusions can be drawn from these results: (1) the trend for increasing branching complexity with anterior progression through OT areas was likely to be present in a common primate ancestor, and (2) specialization in neuron structure more likely facilitates object recognition than spectral processing.
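Abstracts in this literature rarely spell out how D is estimated; the standard box-counting definition, given here as background rather than as this paper's exact estimator, is

$$D = \lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log(1/\epsilon)},$$

where $N(\epsilon)$ is the number of boxes of side $\epsilon$ needed to cover the dendritic arbor. In practice D is read off as the slope of $\log N(\epsilon)$ against $\log(1/\epsilon)$ over a range of box sizes, with larger D indicating a more space-filling, more complex branching pattern.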
A longitudinal investigation of imitation, pretend play and mirror self-recognition in human infants
Abstract:
By 24 months of age most children show mirror self-recognition. When surreptitiously marked on their forehead and then presented with a mirror, they explore their own head for the unexpected mark. Here we demonstrate that self-recognition in mirrors does not generalize to other visual feedback. We tested 80 children on mirror and live video versions of the task. Whereas 90% of 24-month-olds passed the mirror version, only 35% passed the video version. Seventy percent of 30-month-olds showed video self-recognition, and only by age 36 months did the pass rate on the video version reach 90%. It remains to be
Abstract:
The influence of temporal association on the representation and recognition of objects was investigated. Observers were shown sequences of novel faces in which the identity of the face changed as the head rotated. As a result, observers showed a tendency to treat the views as if they were of the same person. Additional experiments revealed that this was only true if the training sequences depicted head rotations rather than jumbled views; in other words, the sequence had to be spatially as well as temporally smooth. Results suggest that we are continuously associating views of objects to support later recognition, and that we do so not only on the basis of physical similarity, but also on the basis of the objects' correlated appearance in time.
Abstract:
Some motor tasks can be completed, quite literally, with our eyes shut. Most people can touch their nose without looking or reach for an object after only a brief glance at its location. This distinction leads to one of the defining questions of movement control: is information gleaned prior to starting the movement sufficient to complete the task (open loop), or is feedback about the progress of the movement required (closed loop)? One task that has commanded considerable interest in the literature over the years is that of steering a vehicle, in particular lane-correction and lane-changing tasks. Recent work has suggested that this type of task can proceed in a fundamentally open-loop manner [1 and 2], with feedback mainly serving to correct minor, accumulating errors. This paper reevaluates the conclusions of these studies by conducting a new set of experiments in a driving simulator. We demonstrate that, in fact, drivers rely on regular visual feedback, even during the well-practiced steering task of lane changing. Without feedback, drivers fail to initiate the return phase of the maneuver, resulting in systematic errors in final heading. The results provide new insight into the control of vehicle heading, suggesting that drivers employ a simple policy of “turn and see,” with only limited understanding of the relationship between steering angle and vehicle heading.
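To make the open-loop versus closed-loop contrast concrete, here is a minimal toy simulation of a lane change; it is not the authors' driving simulator, and the kinematic model, gains, and steering profile are all illustrative assumptions. The open-loop driver executes a pre-planned turn-then-return steering profile, while the closed-loop driver continually corrects against the visible lateral error and heading, in the spirit of the "turn and see" policy the abstract describes.

```python
import math

DT = 0.05          # s, simulation step (illustrative)
SPEED = 20.0       # m/s, assumed constant forward speed
LANE_SHIFT = 3.5   # m, target lateral offset for the lane change

def step(y, heading, steer_rate):
    """Toy kinematic update: steering changes heading, heading changes lateral position."""
    heading += steer_rate * DT
    y += SPEED * math.sin(heading) * DT
    return y, heading

def open_loop(return_scale=1.0, duration=4.0):
    """Pre-planned turn-then-counter-turn profile with no visual feedback.
    return_scale < 1 mimics a poorly calibrated return phase."""
    y = heading = 0.0
    steps = int(duration / DT)
    for i in range(steps):
        steer = 0.02 if i < steps // 2 else -0.02 * return_scale
        y, heading = step(y, heading, steer)
    return y, heading

def closed_loop(duration=4.0, k_y=1.0, k_h=2.0):
    """Continuous feedback on lateral error and heading."""
    y = heading = 0.0
    for _ in range(int(duration / DT)):
        steer = k_y * (LANE_SHIFT - y) / SPEED - k_h * heading
        y, heading = step(y, heading, steer)
    return y, heading

if __name__ == "__main__":
    print("open loop, perfect return: y=%.2f m, heading=%.3f rad" % open_loop(1.0))
    print("open loop, weak return:    y=%.2f m, heading=%.3f rad" % open_loop(0.6))
    print("closed loop:               y=%.2f m, heading=%.3f rad" % closed_loop())
```

Running the sketch shows the qualitative failure mode the abstract reports: when the pre-planned return phase is miscalibrated and no feedback is available, the vehicle ends the maneuver with a residual heading error, whereas the feedback controller drives both the lateral and heading errors toward zero.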
Abstract:
We examined the influence of backrest inclination and vergence demand on the posture and gaze angle that workers adopt to view visual targets placed in different vertical locations. In the study 12 participants viewed a small video monitor placed in 7 locations around a 0.65-m radius arc (from 65° below to 30° above horizontal eye height). Trunk posture was manipulated by changing the backrest inclination of an adjustable chair. Vergence demand was manipulated by using ophthalmic lenses and prisms to mimic the visual consequences of varying target distance. Changes in vertical target location caused large changes in atlanto-occipital posture and gaze angle. Cervical posture was altered to a lesser extent by changes in vertical target location. Participants compensated for changes in backrest inclination by changing cervical posture, though they did not significantly alter atlanto-occipital posture and gaze angle. The posture adopted to view any target represents a compromise between visual and musculoskeletal demands. These results provide support for the argument that the optimal location of visual targets is at least 15° below horizontal eye level. Actual or potential applications of this work include the layout of computer workstations and the viewing of displays from a seated posture.
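As background to the vergence-demand manipulation (the exact lens and prism powers are not given in the abstract), the demand imposed by a real target follows directly from viewing geometry: for interpupillary distance IPD and target distance $d$,

$$\theta \approx 2\arctan\!\left(\frac{\mathrm{IPD}}{2d}\right),$$

so halving the viewing distance roughly doubles the convergence angle the eyes must maintain. Base-out prisms increase the required convergence and minus lenses increase the required accommodation, which is how a nearer target can be mimicked without physically moving the display.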
Abstract:
Spectral peak resolution was investigated in normal-hearing (NH), hearing-impaired (HI), and cochlear implant (CI) listeners. The task involved discriminating between two rippled noise stimuli in which the frequency positions of the log-spaced peaks and valleys were interchanged. The ripple spacing was varied adaptively from 0.13 to 11.31 ripples/octave, and the minimum ripple spacing at which a reversal in peak and trough positions could be detected was determined as the spectral peak resolution threshold for each listener. Spectral peak resolution was best, on average, in NH listeners, poorest in CI listeners, and intermediate for HI listeners. There was a significant relationship between spectral peak resolution and both vowel and consonant recognition in quiet across the three listener groups. The results indicate that the degree of spectral peak resolution required for accurate vowel and consonant recognition in quiet backgrounds is around 4 ripples/octave, and that spectral peak resolution poorer than around 1–2 ripples/octave may result in highly degraded speech recognition. These results suggest that efforts to improve spectral peak resolution for HI and CI users may lead to improved speech recognition.
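The stimuli are easiest to picture as noise whose spectral envelope ripples sinusoidally on a log-frequency axis, with the comparison stimulus having its peaks and valleys interchanged (a half-cycle phase shift of the ripple). The sketch below only illustrates that envelope; the reference frequency, bandwidth, and ripple depth are assumed values, not the authors' exact generation parameters.

```python
import numpy as np

def ripple_envelope(freqs_hz, ripples_per_octave, ref_hz=350.0,
                    depth_db=30.0, inverted=False):
    """Spectral envelope (in dB) with sinusoidal ripples spaced evenly on a
    log-frequency axis; inverted=True swaps the peak and valley positions."""
    octaves = np.log2(freqs_hz / ref_hz)
    phase = np.pi if inverted else 0.0
    return (depth_db / 2.0) * np.sin(2.0 * np.pi * ripples_per_octave * octaves + phase)

# Example: compare standard and inverted envelopes at 2 ripples/octave
freqs = np.logspace(np.log10(350.0), np.log10(5600.0), 512)
standard = ripple_envelope(freqs, ripples_per_octave=2.0)
inverted = ripple_envelope(freqs, ripples_per_octave=2.0, inverted=True)
print("max difference between standard and inverted envelopes (dB):",
      np.max(np.abs(standard - inverted)))
```

The listener's task is then to tell which interval contains the phase-shifted (peak-for-valley) version; as the ripple density increases, the peaks crowd together and the two envelopes become harder to distinguish with limited spectral resolution.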
Abstract:
The purpose of this study was to explore the potential advantages, both theoretical and applied, of preserving low-frequency acoustic hearing in cochlear implant patients. Several hypotheses are presented that predict that residual low-frequency acoustic hearing along with electric stimulation for high frequencies will provide an advantage over traditional long-electrode cochlear implants for the recognition of speech in competing backgrounds. A simulation experiment in normal-hearing subjects demonstrated a clear advantage for preserving low-frequency residual acoustic hearing for speech recognition in a background of other talkers, but not in steady noise. Three subjects with an implanted "short-electrode" cochlear implant and preserved low-frequency acoustic hearing were also tested on speech recognition in the same competing backgrounds and compared to a larger group of traditional cochlear implant users. Each of the three short-electrode subjects performed better than any of the traditional long-electrode implant subjects for speech recognition in a background of other talkers, but not in steady noise, in general agreement with the simulation studies. When compared to a subgroup of traditional implant users matched according to speech recognition ability in quiet, the short-electrode patients showed a 9-dB advantage in the multitalker background. These experiments provide strong preliminary support for retaining residual low-frequency acoustic hearing in cochlear implant patients. The results are consistent with the idea that better perception of voice pitch, which can aid in separating voices in a background of other talkers, was responsible for this advantage.
Abstract:
The purpose of the present study was to examine the benefits of providing audible speech to listeners with sensorineural hearing loss when the speech is presented in a background noise. Previous studies have shown that when listeners have a severe hearing loss in the higher frequencies, providing audible speech (in a quiet background) to these higher frequencies usually results in no improvement in speech recognition. In the present experiments, speech was presented in a background of multitalker babble to listeners with various severities of hearing loss. The signal was low-pass filtered at numerous cutoff frequencies and speech recognition was measured as additional high-frequency speech information was provided to the hearing-impaired listeners. It was found in all cases, regardless of hearing loss or frequency range, that providing audible speech resulted in an increase in recognition score. The change in recognition as the cutoff frequency was increased, along with the amount of audible speech information in each condition (articulation index), was used to calculate the "efficiency" of providing audible speech. Efficiencies were positive for all degrees of hearing loss. However, the gains in recognition were small, and the maximum score obtained by any listener was low, due to the noise background. An analysis of error patterns showed that due to the limited speech audibility in a noise background, even severely impaired listeners used additional speech audibility in the high frequencies to improve their perception of the "easier" features of speech, including voicing.
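The abstract does not define "efficiency" formally; one plausible reading, offered here only as an assumption about the intended calculation, is the gain in recognition score per unit of added audibility, or that gain relative to what the Articulation Index would predict:

$$\text{efficiency} \;=\; \frac{\Delta P_{\text{obtained}}}{\Delta \mathrm{AI}} \qquad\text{or}\qquad \frac{\Delta P_{\text{obtained}}}{\Delta P_{\text{predicted from }\Delta \mathrm{AI}}}.$$

On either reading, a positive efficiency means the added high-frequency audibility helped, even if it helped less than AI theory would predict.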
Abstract:
Extracting human postural information from video sequences has proved a difficult research question. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining the success, both accuracy and efficiency, of any visual tracker. In this paper we compare approaches used by other authors and present a cost-path approach that is commonly used in image segmentation problems but is currently not widely used in tracking applications.
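For readers unfamiliar with the framework being compared, a particle filter maintains the posterior over pose as a set of weighted samples that are propagated, re-weighted by the observational likelihood, and resampled at each frame. The sketch below shows only that generic loop; the pose representation, motion model, and Gaussian likelihood are placeholders, not the cost-path likelihood this paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, likelihood_fn,
                         process_noise=0.05):
    """One generic predict-weight-resample update.

    particles:  (N, D) array of pose hypotheses
    weights:    (N,) normalized importance weights
    likelihood_fn(particle, observation) -> observation likelihood
    """
    # Predict: diffuse particles with a simple random-walk motion model
    particles = particles + rng.normal(0.0, process_noise, particles.shape)

    # Weight: evaluate the observational probability distribution at each particle
    weights = np.array([likelihood_fn(p, observation) for p in particles])
    weights = weights / weights.sum()

    # Resample: draw particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy usage: 1-D "pose" with a Gaussian observation likelihood (purely illustrative)
particles = rng.normal(0.0, 1.0, (200, 1))
weights = np.full(200, 1.0 / 200)
observation = 0.7
gaussian_likelihood = lambda p, z: np.exp(-0.5 * ((p[0] - z) / 0.2) ** 2)
particles, weights = particle_filter_step(particles, weights, observation,
                                          gaussian_likelihood)
print("posterior mean estimate:", particles.mean())
```

The paper's contribution sits in the weighting step: how the observational likelihood is shaped (here, by a cost-path measure borrowed from segmentation) largely determines both the accuracy and the number of particles the tracker needs.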
Abstract:
We present a fast method for finding optimal parameters for a low-resolution (threading) force field intended to distinguish correct from incorrect folds for a given protein sequence. In contrast to other methods, the parameterization uses information from >10^7 misfolded structures as well as a set of native sequence-structure pairs. In addition to testing the resulting force field's performance on the protein sequence threading problem, results are shown that characterize the number of parameters necessary for effective structure recognition.
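The abstract does not spell out the optimization scheme, but force fields of this kind are commonly written as a linear function of structural features, with parameters chosen so that every native sequence-structure pair scores lower (better) than its misfolded decoys. The sketch below illustrates that formulation with simple perceptron-style updates; the feature construction, update rule, and all values are illustrative assumptions rather than the authors' method.

```python
import numpy as np

def fit_linear_force_field(native_feats, decoy_feats, n_epochs=50, lr=0.1):
    """Fit weights w so that energy E(x) = w . features(x) satisfies
    E(native) < E(decoy) for every (native, decoy) pair, using simple
    perceptron-style updates on violated constraints.

    native_feats: (M, D) feature vectors of native structures
    decoy_feats:  list of (K_i, D) arrays, decoys for each native i
    """
    w = np.zeros(native_feats.shape[1])
    for _ in range(n_epochs):
        violations = 0
        for nat, decoys in zip(native_feats, decoy_feats):
            for dec in decoys:
                # Constraint: w.nat < w.dec (native must have lower energy)
                if w @ nat >= w @ dec:
                    w += lr * (dec - nat)   # push decoy energy up, native down
                    violations += 1
        if violations == 0:
            break
    return w

# Toy example with random features (illustrative, not real contact statistics)
rng = np.random.default_rng(1)
natives = rng.normal(0.0, 1.0, (5, 8))
decoys = [natives[i] + np.abs(rng.normal(0.5, 0.2, (20, 8))) for i in range(5)]
w = fit_linear_force_field(natives, decoys)
print("learned parameter vector:", np.round(w, 2))
```

The appeal of a linear parameterization is that very large decoy sets (here, more than 10^7 misfolds) translate into inequality constraints that can be handled efficiently, and the number of parameters needed for reliable recognition can be probed directly by varying the feature dimension.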
Abstract:
It is known that some Virtual Reality (VR) head-mounted displays (HMDs) can cause temporary deficits in binocular vision. However, the precise mechanism by which this visual stress occurs is unclear. This paper is concerned with a potential source of visual stress that has not been previously considered with regard to VR systems: inappropriate vertical gaze angle. As vertical gaze angle is raised or lowered, the 'effort' required of the binocular system also changes. The extent to which changes in vertical gaze angle alter the demands placed upon the vergence eye movement system was explored. The results suggested that visual stress may depend, in part, on vertical gaze angle. The proximity of the display screens within an HMD means that the headset must sit at the correct vertical location for each individual user. This factor may explain some previous empirical results and has important implications for headset design. Fortuitously, a reasonably simple solution exists.