28 results for Audio-Visual Automatic Speech Recognition


Relevance: 30.00%

Abstract:

Between 8 and 40% of Parkinson disease (PD) patients will have visual hallucinations (VHs) during the course of their illness. Although cognitive impairment has been identified as a risk factor for hallucinations, more specific neuropsychological deficits underlying such phenomena have not been established. Research in psychopathology has converged to suggest that hallucinations are associated with confusion between internal representations of events and real events (i.e. impaired source monitoring). We evaluated three groups: 17 Parkinson's patients with visual hallucinations, 20 Parkinson's patients without hallucinations and 20 age-matched controls, using tests of visual imagery, visual perception and memory, including tests of source monitoring and recollective experience. The study revealed that Parkinson's patients with hallucinations appear to have intact visual imagery processes and spatial perception. However, there were impairments in object perception and recognition memory, and poorer recollection of the encoding episode in comparison to both non-hallucinating Parkinson's patients and healthy controls. Errors were especially likely to occur when encoding and retrieval cues were in different modalities. The findings raise the possibility that visual hallucinations in Parkinson's patients could stem from a combination of faulty perceptual processing of environmental stimuli and less detailed recollection of experience, combined with intact image generation. (C) 2002 Elsevier Science Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on semantic content rather than keywords. To enable intelligent web interactions or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for a few years in the field of ontological engineering with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.

Relevance: 30.00%

Abstract:

Frequency recognition is an important task in many engineering fields such as audio signal processing and telecommunications engineering, for example in applications like Dual-Tone Multi-Frequency (DTMF) detection or the recognition of the carrier frequency of a Global Positioning System (GPS) signal. This paper presents results of investigations of several common Fourier Transform-based frequency recognition algorithms implemented in real time on a Texas Instruments (TI) TMS320C6713 Digital Signal Processor (DSP) core. In addition, suitable metrics are evaluated in order to ascertain which of these algorithms is appropriate for audio signal processing.
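The abstract names DTMF detection as one target application but does not reproduce the algorithms it benchmarks. As a rough illustration of Fourier-transform-based frequency recognition, here is a minimal NumPy sketch of single-frame DTMF detection; the function name, frame length and sampling rate are illustrative only and are unrelated to the TI TMS320C6713 implementation evaluated in the paper.

```python
import numpy as np

# Standard DTMF frequency pairs (low group, high group) in Hz.
DTMF_LOW = [697, 770, 852, 941]
DTMF_HIGH = [1209, 1336, 1477, 1633]
DTMF_KEYS = {
    (697, 1209): '1', (697, 1336): '2', (697, 1477): '3', (697, 1633): 'A',
    (770, 1209): '4', (770, 1336): '5', (770, 1477): '6', (770, 1633): 'B',
    (852, 1209): '7', (852, 1336): '8', (852, 1477): '9', (852, 1633): 'C',
    (941, 1209): '*', (941, 1336): '0', (941, 1477): '#', (941, 1633): 'D',
}

def detect_dtmf(frame, fs):
    """Return the DTMF key whose two tones dominate the frame's spectrum."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)

    def strongest(candidates):
        # Pick the candidate frequency with the largest energy in its nearest FFT bin.
        return max(candidates, key=lambda f: spectrum[np.argmin(np.abs(freqs - f))])

    return DTMF_KEYS.get((strongest(DTMF_LOW), strongest(DTMF_HIGH)))

if __name__ == "__main__":
    fs = 8000.0                       # 8 kHz sampling rate, typical for telephony
    t = np.arange(0, 0.05, 1.0 / fs)  # 50 ms analysis frame
    tone = np.sin(2 * np.pi * 770 * t) + np.sin(2 * np.pi * 1336 * t)
    print(detect_dtmf(tone, fs))      # expected: '5'
```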

Relevance: 30.00%

Abstract:

A large volume of visual content is inaccessible until effective and efficient indexing and retrieval of such data is achieved. In this paper, we introduce the DREAM system, a knowledge-assisted, semantic-driven, context-aware visual information retrieval system applied in the film post-production domain. We mainly focus on the automatic labelling and topic map related aspects of the framework. The use of context-related collateral knowledge, represented by a novel probabilistic visual keyword co-occurrence matrix, has been proven effective via the experiments conducted during system evaluation. The automatically generated semantic labels were fed into the Topic Map Engine, which automatically constructs ontological networks using Topic Maps technology, dramatically enhancing the indexing and retrieval performance of the system at a higher semantic level.
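The abstract does not detail how its probabilistic visual keyword co-occurrence matrix is constructed. The sketch below shows one common formulation, estimating conditional co-occurrence probabilities from keyword-annotated shots; all function names and the toy data are hypothetical illustrations, not the DREAM system's actual code.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_probabilities(labelled_items):
    """Estimate P(keyword_b | keyword_a) from items annotated with keyword sets.

    labelled_items: iterable of keyword sets, one per image/shot.
    Returns a nested dict: probs[a][b] ~ probability that b appears given a appears.
    """
    keyword_count = defaultdict(int)
    pair_count = defaultdict(int)
    for keywords in labelled_items:
        for k in keywords:
            keyword_count[k] += 1
        for a, b in combinations(sorted(keywords), 2):
            pair_count[(a, b)] += 1
            pair_count[(b, a)] += 1

    probs = defaultdict(dict)
    for (a, b), n_ab in pair_count.items():
        probs[a][b] = n_ab / keyword_count[a]
    return probs

# Toy example: three annotated shots from a hypothetical effects archive.
shots = [{"explosion", "smoke"}, {"explosion", "fire", "smoke"}, {"rain", "smoke"}]
probs = cooccurrence_probabilities(shots)
print(probs["explosion"]["smoke"])   # 1.0: smoke occurred in every explosion shot
```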

Relevance: 30.00%

Abstract:

This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed. Copyright (C) 2007 Hindawi Publishing Corporation. All rights reserved.
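The layered background model itself is not described in the abstract. As a stand-in illustration of the motion-detection stage that feeds the trackers, the sketch below uses OpenCV's mixture-of-Gaussians background subtractor (not the authors' exact method); the input file name and thresholds are illustrative only.

```python
import cv2

# Mixture-of-Gaussians background model; a generic substitute for the paper's
# layered background model, which the abstract does not specify.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("surveillance.avi")   # hypothetical input clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Drop shadow pixels (marked 127 by MOG2) and keep only confident foreground.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 100]
    # 'boxes' would feed the appearance-based tracking module in a full pipeline.
cap.release()
```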

Relevance: 30.00%

Abstract:

In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt at bridging the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which combines collateral knowledge extracted from the texts accompanying the images with state-of-the-art low-level visual feature extraction techniques to automatically assign textual keywords to image regions. A subset of the Corel image collection was used for evaluating the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
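As a loose illustration of region-level keyword assignment of the kind CCL performs, the sketch below labels each segmented region with the keyword whose visual prototype is nearest; the prototypes are assumed (for this sketch only) to have been learned from regions whose collateral text mentions the keyword. This is a deliberately simplified assumption, not the authors' CCL algorithm.

```python
import numpy as np

def label_regions(region_features, keyword_prototypes):
    """Assign each region the keyword whose visual prototype is nearest.

    region_features: (n_regions, n_dims) array of low-level descriptors per region.
    keyword_prototypes: dict keyword -> (n_dims,) prototype vector, assumed learned
                        from regions whose collateral text mentions the keyword.
    """
    keywords = list(keyword_prototypes)
    prototypes = np.stack([keyword_prototypes[k] for k in keywords])
    # Euclidean distance from every region to every keyword prototype.
    dists = np.linalg.norm(region_features[:, None, :] - prototypes[None, :, :], axis=2)
    return [keywords[i] for i in dists.argmin(axis=1)]

# Toy example with 3-D feature vectors and two keyword prototypes.
prototypes = {"sky": np.array([0.1, 0.2, 0.9]), "grass": np.array([0.2, 0.8, 0.1])}
regions = np.array([[0.15, 0.25, 0.85], [0.3, 0.7, 0.2]])
print(label_regions(regions, prototypes))   # ['sky', 'grass']
```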

Relevance: 30.00%

Abstract:

Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied in frequency (low/high) and number of phonological onset neighbors (low/high density). Adolescents with ALI required more speech input to initially identify low-frequency words with low competitor density than those with SLI and those with TLD, who did not differ. These differences may be due to less well-specified word form representations in ALI.

Relevance: 30.00%

Abstract:

A number of new and newly improved methods for predicting protein structure developed by the Jones–University College London group were used to make predictions for the CASP6 experiment. Structures were predicted with a combination of fold recognition methods (mGenTHREADER, nFOLD, and THREADER) and a substantially enhanced version of FRAGFOLD, our fragment assembly method. Attempts at automatic domain parsing were made using DomPred and DomSSEA, which are based on a secondary structure parsing algorithm and, for DomPred, additionally a simple local sequence alignment scoring function. Disorder prediction was carried out using a new SVM-based version of DISOPRED. Attempts were also made at domain docking and “microdomain” folding in order to build complete chain models for some targets.

Relevance: 30.00%

Abstract:

Background: Word deafness is a rare condition where pathologically degraded speech perception results in impaired repetition and comprehension but otherwise intact linguistic skills. Although impaired linguistic systems in aphasias resulting from damage to the neural language system (here termed central impairments) have been consistently shown to be amenable to external influences such as linguistic or contextual information (e.g. cueing effects in naming), it is not known whether similar influences can be shown for aphasia arising from damage to a perceptual system (here termed peripheral impairments). Aims: This study aimed to investigate the extent to which pathologically degraded speech perception could be facilitated or disrupted by providing visual as well as auditory information. Methods and Procedures: In three word repetition tasks, the participant with word deafness (AB) repeated words under different conditions: words were repeated in the context of a pictorial or written target, a distractor (semantic, unrelated, rhyme or phonological neighbour) or a blank page (nothing). Accuracy and error types were analysed. Results: AB was impaired at repetition in the blank condition, confirming her degraded speech perception. Repetition was significantly facilitated when accompanied by a picture or written example of the word and significantly impaired by the presence of a written rhyme. Errors in the blank condition were primarily formal, whereas errors in the rhyme condition were primarily miscues (saying the distractor word rather than the target). Conclusions: Cross-modal input can both facilitate and further disrupt repetition in word deafness. The cognitive mechanisms behind these findings are discussed. Both top-down influence from the lexical layer on perceptual processes and intra-lexical competition within the lexical layer may play a role.

Relevance: 30.00%

Abstract:

Facial expression recognition was investigated in 20 males with high-functioning autism (HFA) or Asperger syndrome (AS), compared to typically developing individuals matched for chronological age (TD CA group) and verbal and non-verbal ability (TD V/NV group). This was the first study to employ a visual search, “face in the crowd” paradigm with a HFA/AS group, which explored responses to numerous facial expressions using real-face stimuli. Results showed slower response times for processing fear, anger and sad expressions in the HFA/AS group, relative to the TD CA group, but not the TD V/NV group. Responses to happy, disgust and surprise expressions showed no group differences. Results are discussed with reference to the amygdala theory of autism.

Relevance: 30.00%

Abstract:

We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition, which divides recognition into two stages, our method performs recognition of simple and complex actions in a unified way. This is achieved by encoding simple-action HMMs within the stochastic grammar that models complex actions. This unified approach enables a more effective influence of the higher activity layers on the recognition of simple actions, which leads to a substantial improvement in the classification of complex actions. We consider the recognition of complex actions based on person transits between areas in the scene. As input, our method receives crossings of tracks across a set of zones, which are derived using unsupervised learning of the movement patterns of the objects in the scene. We evaluate our method on a large dataset showing normal, suspicious and threat behaviour in a parking lot. Experiments show an improvement of ~30% in the recognition of both high-level scenarios and their constituent simple actions with respect to a two-stage approach. Experiments with synthetic noise simulating the most common tracking failures show that our method only experiences a limited decrease in performance when moderate amounts of noise are added.
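The unified grammar-plus-HMM formulation is only outlined in the abstract. The sketch below illustrates the general idea under strong simplifications: each simple action is a small discrete HMM scored with the forward algorithm, and a toy "grammar" assigns a prior to each sequence of simple actions. Segmentation is assumed to be given, unlike the paper's unified recognition, and all models, zone symbols and data are invented for illustration.

```python
import numpy as np

def hmm_loglik(obs, start, trans, emit):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete observation sequence."""
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        log_p += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_p

# Two illustrative simple-action HMMs over 3 symbolic zone-crossing observations.
ACTIONS = {
    "cross_lot": dict(start=np.array([1.0, 0.0]),
                      trans=np.array([[0.7, 0.3], [0.0, 1.0]]),
                      emit=np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])),
    "loiter":    dict(start=np.array([0.5, 0.5]),
                      trans=np.array([[0.9, 0.1], [0.1, 0.9]]),
                      emit=np.array([[0.2, 0.6, 0.2], [0.2, 0.6, 0.2]])),
}

# Toy 'grammar': each complex action is a prior plus a sequence of simple actions.
GRAMMAR = {
    "normal_visit": (0.7, ["cross_lot", "cross_lot"]),
    "suspicious":   (0.3, ["loiter", "cross_lot"]),
}

def score_complex(segments, prior, sequence):
    """Joint log-score of pre-segmented observations under one grammar rule."""
    if len(segments) != len(sequence):
        return -np.inf
    return np.log(prior) + sum(hmm_loglik(seg, **ACTIONS[a])
                               for seg, a in zip(segments, sequence))

segments = [[1, 1, 1, 1], [0, 0, 2]]      # zone-crossing symbols per segment
best = max(GRAMMAR, key=lambda c: score_complex(segments, *GRAMMAR[c]))
print(best)                                # expected: 'suspicious'
```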

Relevance: 30.00%

Abstract:

Background: Atypical self-processing is an emerging theme in autism research, suggested by a lower self-reference effect in memory and atypical neural responses to visual self-representations. Most research on physical self-processing in autism uses visual stimuli. However, the self is a multimodal construct, and therefore it is essential to test self-recognition in other sensory modalities as well. Self-recognition in the auditory modality remains relatively unexplored and has not been tested in relation to autism and related traits. This study investigates self-recognition in the auditory and visual domains in the general population and tests whether it is associated with autistic traits. Methods: Thirty-nine neurotypical adults participated in a two-part study. In the first session, each participant's voice was recorded and face photographed, and these were morphed with voices and faces from unfamiliar identities. In the second session, participants performed a 'self-identification' task, classifying each morph as a 'self' voice (or face) or an 'other' voice (or face). All participants also completed the Autism Spectrum Quotient (AQ). For each sensory modality, the slope of the self-recognition curve was used as the individual self-recognition metric. These two self-recognition metrics were tested for association with each other and with autistic traits. Results: The 50% 'self' response point was reached at a higher percentage of self in the auditory domain than in the visual domain (t = 3.142, P < 0.01). No significant correlation was noted between self-recognition bias across sensory modalities (τ = −0.165, P = 0.204). Higher recognition bias for self-voice was observed in individuals higher in autistic traits (τ_AQ = 0.301, P = 0.008). No such correlation was observed between recognition bias for self-face and autistic traits (τ_AQ = −0.020, P = 0.438). Conclusions: Our data show that recognition bias for physical self-representation is not related across sensory modalities. Further, individuals with higher autistic traits were better able to discriminate self from other voices, but this relation was not observed with self-face. The narrower self-other overlap in the auditory domain seen in individuals with high autistic traits could arise from the enhanced perceptual processing of auditory stimuli often observed in individuals with autism.
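The abstract uses the slope of each participant's self-recognition curve as the metric but does not specify the fitting procedure. Below is a generic sketch, assuming a logistic psychometric function fitted with SciPy; the toy data, parameter names and starting values are illustrative, not the study's actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x50, slope):
    """Psychometric curve: probability of a 'self' response vs. % self in the morph."""
    return 1.0 / (1.0 + np.exp(-slope * (x - x50)))

def self_recognition_metrics(morph_percent, p_self_response):
    """Fit the curve and return (point of 50% 'self' responses, slope)."""
    (x50, slope), _ = curve_fit(logistic, morph_percent, p_self_response,
                                p0=[50.0, 0.1])
    return x50, slope

# Toy data: proportion of 'self' responses at each self/other morph level.
morph = np.array([0, 20, 40, 60, 80, 100], dtype=float)
p_self = np.array([0.02, 0.10, 0.35, 0.75, 0.95, 0.99])
x50, slope = self_recognition_metrics(morph, p_self)
print(f"50% 'self' point: {x50:.1f}%  slope: {slope:.3f}")
```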

Relevance: 30.00%

Abstract:

What is already known on the subject? Multi-sensory treatment approaches have been shown to impact outcome measures positively, such as accuracy of speech movement patterns and speech intelligibility, in adults with motor speech disorders as well as in children with apraxia of speech, autism and cerebral palsy. However, there has been no empirical study using multi-sensory treatment for children with speech sound disorders (SSDs) who demonstrate motor control issues in the jaw and orofacial structures (e.g. jaw sliding, jaw overextension, inadequate lip rounding/retraction and decreased integration of speech movements). What this paper adds? Findings from this study indicate that, for speech production disorders where both the planning and production of spatiotemporal parameters of movement sequences for speech are disrupted, multi-sensory treatment programmes that integrate auditory, visual and tactile–kinesthetic information improve auditory and visual accuracy of speech production. Both the training words (practised in treatment) and the test words (not practised in treatment) demonstrated positive change in most participants, indicating generalization of target features to untrained words. It is inferred that treatment that focuses on integrating multi-sensory information and normalizing parameters of speech movements is an effective method for treating children with SSDs who demonstrate speech motor control issues.