994 results for LIP-READING


Relevance:

100.00%

Abstract:

Three experiments investigated irrelevant-sound interference with memory for lip-read lists. In Experiment 1, an acoustically changing sequence of nine irrelevant utterances was more disruptive to immediate spoken identification of lists of nine lip-read digits than nine repetitions of the same utterance (the changing-state effect; Jones, Madden, & Miles, 1992). Experiment 2 replicated this finding when lip-read items were sampled with replacement from the nine digits to form the lip-read lists. In Experiment 3, when the irrelevant sound was confined to the retention interval of a delayed recall task, a changing-state pattern of disruption also occurred. The results confirm a changing-state effect in memory for lip-read items but also point to the possibility that, for lip-reading, changing-state effects occur at an earlier, perceptual stage.


Relevance:

70.00%

Abstract:

This paper investigates the use of lip information, in conjunction with speech information, for robust speaker verification in the presence of background noise. We have previously shown (Int. Conf. on Acoustics, Speech and Signal Proc., vol. 6, pp. 3693-3696, May 1998) that features extracted from a speaker's moving lips carry speaker dependencies which are complementary to speech features. We demonstrate that the fusion of lip and speech information allows for a highly robust speaker verification system which outperforms either subsystem individually. We present a new technique for determining the weighting to be applied to each modality so as to optimize the performance of the fused system. Given a correct weighting, lip information is shown to be highly effective for reducing the false acceptance and false rejection error rates in the presence of background noise.
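
The weighting technique itself is not detailed in the abstract. As a point of reference, the simplest form of score-level fusion is a convex combination of the two subsystems' verification scores; the sketch below (the function names, the weight `alpha`, and the threshold are illustrative assumptions, not the paper's method) shows the general idea.

```python
import numpy as np

def fuse_scores(speech_score: float, lip_score: float, alpha: float) -> float:
    """Linear score-level fusion of two verification subsystems.

    alpha = 1.0 trusts the speech subsystem alone; alpha = 0.0 trusts
    the lip subsystem alone. The paper tunes this weighting to optimize
    fused performance; here it is simply a free parameter.
    """
    return alpha * speech_score + (1.0 - alpha) * lip_score

def verify(speech_score: float, lip_score: float,
           alpha: float = 0.5, threshold: float = 0.5) -> bool:
    """Accept the claimed identity if the fused score clears a threshold."""
    return fuse_scores(speech_score, lip_score, alpha) >= threshold

# Example: in noise the acoustic score degrades, but the visual score
# keeps the genuine speaker above the acceptance threshold.
print(verify(speech_score=0.35, lip_score=0.85))  # True
```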

Relevance:

60.00%

Abstract:

Audio-visual speech recognition, or the combination of visual lip-reading with traditional acoustic speech recognition, has previously been shown to provide a considerable improvement over acoustic-only approaches in noisy environments, such as the interior of an automotive cabin. The research presented in this paper extends the established audio-visual speech recognition literature to show that further improvements in speech recognition accuracy can be obtained when multiple frontal or near-frontal views of a speaker's face are available. A series of visual speech recognition experiments using a four-stream visual synchronous hidden Markov model (SHMM) is conducted on the four-camera AVICAR automotive audio-visual speech database. We study the relative contribution of the side- and centrally-oriented cameras to improving visual speech recognition accuracy. Finally, combining the four visual streams with a single audio stream in a five-stream SHMM demonstrates a relative improvement of over 56% in word recognition accuracy compared to the acoustic-only approach in the noisiest conditions of the AVICAR database.
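
The abstract does not spell out how the streams are combined. In a synchronous multi-stream HMM, a standard formulation raises each stream's state likelihood to a stream exponent, so the combined log-likelihood is a weighted sum of per-stream log-likelihoods. A minimal sketch under that assumption (all weights and scores below are invented for illustration):

```python
import numpy as np

def multistream_log_likelihood(stream_loglikes: np.ndarray,
                               stream_weights: np.ndarray) -> float:
    """Combine per-stream state log-likelihoods in a synchronous
    multi-stream HMM: log b(o) = sum_s w_s * log b_s(o_s).

    stream_loglikes : observation log-likelihood of each stream for one state
    stream_weights  : non-negative stream exponents (often summing to 1)
    """
    return float(np.dot(stream_weights, stream_loglikes))

# One audio stream plus four visual streams (e.g. the four AVICAR
# camera views); these weights are illustrative, not the paper's.
loglikes = np.array([-12.1, -9.8, -10.4, -11.0, -10.1])
weights = np.array([0.4, 0.15, 0.15, 0.15, 0.15])
print(multistream_log_likelihood(loglikes, weights))
```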

Relevance:

60.00%

Abstract:

Speech has both auditory and visual components (heard speech sounds and seen articulatory gestures). During all perception, selective attention facilitates efficient information processing and enables concentration on high-priority stimuli. Auditory and visual sensory systems interact at multiple processing levels during speech perception, and the classical motor speech regions also seem to participate in speech perception. Auditory, visual, and motor-articulatory processes may thus work in parallel during speech perception, their use possibly depending on the information available and the individual characteristics of the observer. Because of their subtle speech perception difficulties, possibly stemming from disturbances at elemental levels of sensory processing, dyslexic readers may rely more on motor-articulatory speech perception strategies than fluent readers do. This thesis aimed to investigate the neural mechanisms of speech perception and selective attention in fluent and dyslexic readers.

We conducted four functional magnetic resonance imaging experiments, during which subjects perceived articulatory gestures, speech sounds, and other auditory and visual stimuli. Gradient echo-planar images depicting blood oxygenation level-dependent contrast were acquired during stimulus presentation to indirectly measure brain hemodynamic activation.

Lip-reading activated the primary auditory cortex, and selective attention to visual speech gestures enhanced activity within the left secondary auditory cortex. Attention to non-speech sounds enhanced auditory cortex activity bilaterally; this effect was modulated by sound presentation rate. A comparison of fluent and dyslexic readers' brain hemodynamic activity during audiovisual speech perception revealed stronger activation of predominantly motor speech areas in dyslexic readers during a contrast that allowed exploration of the processing of phonetic features extracted from auditory and visual speech.

The results show that visual speech perception modulates hemodynamic activity within auditory cortex areas once considered unimodal, and suggest that the left secondary auditory cortex specifically participates in extracting the linguistic content of seen articulatory gestures. They provide strong evidence for the importance of attention as a modulator of auditory cortex function during both sound processing and visual speech perception, and highlight the interactive nature of attention, which is influenced by stimulus-driven effects. Further, they suggest a heightened reliance on motor-articulatory and visual speech perception strategies among dyslexic readers, possibly compensating for their auditory speech perception difficulties.

Relevance:

60.00%

Abstract:

Objective: The objective of this study was to learn about the psychosocial well-being and life management of Finnish adults with late deafness or hearing loss, and to assess the effectiveness of the rehabilitation courses they participated in.

Methods: I used indicators suited to evaluating the life management and psychosocial well-being of late-deafened adults. The first part of the study was conducted during 2009 as a questionnaire on three rehabilitation courses in Kopola, a course center of the Finnish Federation of Hard of Hearing. The follow-up study was done at the third period of the courses during 2009-2010. The questionnaire contained both open and structured questions and covered five areas of life management and psychosocial well-being: sense of coherence (life management), human relations and social support, mood, self-esteem, and satisfaction with life. I also asked the participants to reflect on their experiences of group rehabilitation.

Results and conclusions: The participants were seven women and three men, all retired and approximately 63 years old. They described their loss of hearing as having affected their social life and free time and, in general, made their lives more difficult. From the course the participants hoped to gain new skills such as signed speech and lip-reading, to lift their mood, to accept their hearing loss, and to experience peer support. After the courses they reported having more close relationships, with whom they were also somewhat more in contact. More participants were satisfied with, for example, their ability to take care of themselves, their free time, financial situation, family life, mental resources, and physical condition. A majority of the participants showed symptoms of depression when the courses started, but by the end of the courses these signs had moderated or disappeared for most of them. The participants felt that during the rehabilitation they had been heard, respected, accepted, and cared for. The course provided an opportunity for confiding, and the discussions gave the participants support and consolidation. In conclusion, the courses had a positive effect on the participants' adjustment to hearing loss and on their empowerment. The results of this study can be utilized in disability services, in the development of rehabilitation, and in social and health services for senior citizens.

Relevance:

60.00%

Abstract:

A novel method for 3D head tracking in the presence of large head rotations and facial expression changes is described. Tracking is formulated in terms of color image registration in the texture map of a 3D surface model. Model appearance is recursively updated via image mosaicking in the texture map as the head orientation varies. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. Parameters are estimated via a robust minimization procedure; this provides robustness to occlusions, wrinkles, shadows, and specular highlights. The system was tested on a variety of sequences taken with low-quality, uncalibrated video cameras. Experimental results are reported.
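
The abstract names a robust minimization procedure without giving its form. A common way to obtain robustness to occlusions and specular highlights is an M-estimator that down-weights large residuals, as in one step of iteratively reweighted least squares; the Huber weighting below is a generic illustration, not necessarily the authors' exact error norm.

```python
import numpy as np

def huber_weights(residuals: np.ndarray, k: float = 1.345) -> np.ndarray:
    """Per-pixel weights for one iteratively reweighted least-squares step.

    Inlier pixels (|r| <= k) get weight 1; outliers (e.g. occluded or
    specular pixels) are down-weighted as k/|r|, which is what makes
    the registration robust.
    """
    r = np.abs(residuals)
    return np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))

# Residuals between the incoming frame and the model's texture map,
# expressed in units of a robust scale estimate (values illustrative).
res = np.array([0.2, -0.5, 3.0, 0.1, -4.2])
print(huber_weights(res))  # the outliers at 3.0 and -4.2 are down-weighted
```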

Relevance:

60.00%

Abstract:

An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture-mapped cylinder. Tracking is formulated as an image registration problem in the cylinder's texture map image. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. To solve the registration problem in the presence of lighting variation and head motion, the residual error of registration is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is achieved via regularized, weighted least-squares minimization of the registration error. The regularization term tends to limit potential ambiguities that arise in the warping and illumination templates, and it enables stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The formulation is tailored to take advantage of the texture mapping hardware available in many workstations, PCs, and game consoles. The non-optimized implementation runs at about 15 frames per second on an SGI O2 graphics workstation. Extensive experiments evaluating the effectiveness of the formulation are reported. The sensitivity of the technique to illumination, regularization parameters, errors in initial positioning, and internal camera parameters is analyzed. Examples and applications of tracking are reported.
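
The core linear-algebra step described here, solving for warp and illumination template coefficients by regularized, weighted least squares, can be sketched compactly. The code below is a generic instance of that formulation; the template matrix, weights, and ridge parameter are synthetic placeholders, not the paper's values.

```python
import numpy as np

def solve_registration_step(B, e, W, lam):
    """One regularized weighted least-squares update.

    B   : (n_pixels, n_templates) warping and illumination templates
    e   : (n_pixels,) registration residual (current frame vs. texture map)
    W   : (n_pixels,) per-pixel weights from a robust error norm
    lam : ridge term limiting ambiguity between warp and lighting templates

    Solves  min_q || W^(1/2) (B q - e) ||^2 + lam ||q||^2.
    """
    BtW = B.T * W  # equivalent to B.T @ diag(W)
    A = BtW @ B + lam * np.eye(B.shape[1])
    return np.linalg.solve(A, BtW @ e)

# Tiny synthetic example: 6 pixels, 2 warp + 1 illumination template.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 3))
q_true = np.array([0.5, -0.2, 0.8])
e = B @ q_true
q = solve_registration_step(B, e, W=np.ones(6), lam=1e-3)
print(np.round(q, 2))  # recovers q_true up to the small ridge bias
```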

Relevance:

60.00%

Abstract:

In this paper we present, for the first time, results showing the effect of speaker head pose angle on automatic lip-reading performance over a wide range of closely spaced angles. We analyse the effect head pose has upon the features themselves and show that, by selecting coefficients with minimum variance w.r.t. pose angle, recognition performance can be improved when train and test pose angles differ. Experiments are conducted using the initial phase of a unique multi-view audio-visual database designed specifically for research and development of pose-invariant lip-reading systems. We first show that it is the higher-order horizontal spatial frequency components that become most detrimental as the pose deviates. Second, we assess the performance of different feature selection masks across a range of pose angles, including a new mask based on Minimum Cross-Pose Variance coefficients. We report a relative improvement of 50% in Word Error Rate when using our selection mask over a common energy-based selection during profile-view lip-reading.
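
The Minimum Cross-Pose Variance mask, as described, retains the feature coefficients whose values vary least across pose angle. A minimal sketch of such a selection follows; the feature-matrix layout and all values are assumptions for illustration.

```python
import numpy as np

def min_cross_pose_variance_mask(features: np.ndarray, n_keep: int) -> np.ndarray:
    """Select the coefficients most stable across head pose.

    features : (n_poses, n_coeffs) mean feature vector per pose angle
    n_keep   : number of coefficients to retain

    Returns the indices of the n_keep coefficients with the lowest
    variance across poses, i.e. those least affected when train and
    test pose angles differ.
    """
    variance_across_pose = features.var(axis=0)
    return np.argsort(variance_across_pose)[:n_keep]

# Illustrative: 5 pose angles, 8 coefficients; the later columns stand
# in for higher-order components with large cross-pose variance.
rng = np.random.default_rng(1)
feats = rng.standard_normal((5, 8)) * np.array([0.1, 0.2, 0.3, 0.5, 1.0, 1.5, 2.0, 3.0])
print(min_cross_pose_variance_mask(feats, n_keep=4))
```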

Relevance:

60.00%

Abstract:

With many visual speech animation techniques now available, there is a clear need for systematic perceptual evaluation schemes. We describe here our scheme and its application to a new video-realistic (potentially indistinguishable from real recorded video) visual-speech animation system, called Mary 101. Two types of experiments were performed: a) distinguishing visually between real and synthetic image-sequences of the same utterances ("Turing tests"), and b) gauging visual speech recognition by comparing lip-reading performance on real and synthetic image-sequences of the same utterances ("intelligibility tests"). Subjects presented randomly with either real or synthetic image-sequences could not tell the synthetic from the real sequences above chance level. The same subjects, when asked to lip-read the utterances from the same image-sequences, recognized speech from real image-sequences significantly better than from synthetic ones. However, performance for both real and synthetic sequences was at levels suggested in the literature on lip-reading. We conclude from the two experiments that the animation of Mary 101 is adequate for providing the percept of a talking head, but that additional effort is required to improve the animation for lip-reading purposes such as rehabilitation and language learning. These two tasks can be considered explicit and implicit perceptual discrimination tasks: in the explicit task (a), each stimulus is classified directly as a synthetic or real image-sequence by detecting a possible difference between them; the implicit task (b) consists of a comparison between visual recognition of speech from real and synthetic image-sequences. Our results suggest that implicit perceptual discrimination is a more sensitive method for discriminating between synthetic and real image-sequences than explicit perceptual discrimination.
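
"Above chance level" in the Turing-test experiment is naturally a one-sided binomial question: over n forced real-versus-synthetic judgments, did a subject classify correctly more often than fair guessing would predict? A small self-contained check of that kind (the counts here are invented, not the study's data):

```python
from math import comb

def p_above_chance(k: int, n: int) -> float:
    """One-sided binomial p-value: probability of k or more correct
    responses out of n under chance (p = 0.5) guessing."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# Say a subject labelled 54 of 100 randomly ordered sequences correctly
# as real or synthetic (counts invented for illustration).
print(round(p_above_chance(54, 100), 3))  # ~0.242: not above chance
```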

Relevance:

60.00%

Abstract:

This paper reviews a study examining the effect of word position within a sentence on lip-reading performance.

Relevance:

60.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

60.00%

Abstract:

Conclusion: The extended retrolabyrinthine approach (RLA) is a safe and reliable approach for auditory brainstem implant placement in children; the surgical landmarks needed to reach the cochlear nucleus are adequately exposed. Objective: To describe a new approach option for auditory brainstem implants (ABIs) in children, highlighting the anatomical landmarks needed to appropriately expose the foramen of Luschka. Methods: Three prelingually deafened children were consecutively operated on for ABI placement via the RLA. Results: ABI placement via the RLA was successful in all children, with no complications other than multidirectional nystagmus in one child. The RLA we employed differed from that used for vestibular schwannoma only in the removal of the posterior semicircular canal. The lateral and superior semicircular canals and the vestibule remained intact, and there was no need to expose the dura of the internal auditory meatus. The jugular bulb was completely exposed to allow adequate visualization of the ninth cranial nerve and the cerebellar flocculus.

Relevance:

60.00%

Abstract:

The cochlear implant (CI) is one of the most successful neural prostheses developed to date. It offers artificial hearing to individuals with profound sensorineural hearing loss who gain insufficient benefit from conventional hearing aids. The first implants, available some 30 years ago, provided a limited sensation of sound; the benefit for users of these early systems was mostly a facilitation of lip-reading-based communication rather than an understanding of speech. Considerable progress has been made since then. Modern, multichannel implant systems feature complex speech processing strategies, high stimulation rates and multiple sites of stimulation in the cochlea. Equipped with such a state-of-the-art system, the majority of recipients today can communicate orally without visual cues and can even use the telephone. The impact of CIs on deaf individuals and on the deaf community has thus been exceptional. To date, more than 300,000 patients worldwide have received CIs. In Switzerland, the first implantation was performed in 1977 and, as of 2012, over 2,000 systems have been implanted, with a current rate of around 150 CIs per year. The primary purpose of this article is to provide a contemporary overview of cochlear implantation, emphasising the situation in Switzerland.
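
The "complex speech processing strategies" are not specified in this overview; a well-known example of a multichannel strategy is continuous interleaved sampling (CIS), in which band-limited envelopes of the audio drive the stimulation levels of the corresponding electrodes. The toy front end below illustrates only the envelope-extraction idea; the band edges, filter order, and signal are arbitrary choices, not any manufacturer's design.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def channel_envelopes(audio, fs, band_edges):
    """Toy CIS-style front end: bandpass the audio into channels and
    extract each channel's amplitude envelope, which (after compression)
    would set the current level of the corresponding electrode.
    """
    envelopes = []
    for lo, hi in band_edges:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, audio)
        envelopes.append(np.abs(hilbert(band)))
    return np.array(envelopes)

# Illustrative 4-channel filterbank applied to a synthetic 1 kHz tone.
fs = 16000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 1000 * t)
env = channel_envelopes(audio, fs, [(300, 700), (700, 1500), (1500, 3000), (3000, 6000)])
print(env.shape)  # (4, 16000); energy concentrates in the 700-1500 Hz channel
```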