941 results for Visual Speaker Recognition, Visual Speech Recognition, Cascading Appearance-Based Features


Relevance: 60.00%

Abstract:

Forty students from regular grade five classes were divided into two groups of twenty, a good reader group and a poor reader group, on the basis of their reading scores on Canadian Achievement Tests. The subjects took part in four experimental conditions in which they learned lists of pronounceable and unpronounceable pseudowords, some with semantic referents, and responded to questions designed to test visual perceptual learning and lexical and semantic association learning. It was hypothesized that the good reader group would be able to make use of graphemic and phonemic redundancy patterns in order to improve visual perceptual learning and lexical and semantic association learning to a greater extent than would the poor reader group. The data supported this hypothesis, and also indicated that, although the poor readers were less adept at using familiar sound and letter patterns, they were more dependent on such patterns as an aid to visual recognition memory and semantic recall than were the good readers. It was postulated that poor readers are in a double-bind situation of having to choose between using weak graphemic-semantic associations or grapheme-phoneme associations which are also weak and which have hindered them in developing automaticity in reading.

Relevance: 60.00%

Abstract:

Psychopathy is associated with well-known characteristics such as a lack of empathy and impulsive behaviour, but it has also been associated with impaired recognition of emotional facial expressions. The use of event-related potentials (ERPs) to examine this phenomenon could shed light on the specific time course and neural activation associated with emotion recognition processes as they relate to psychopathic traits. In the current study we examined the P1, N170, and vertex positive potential (VPP) ERP components and behavioural performance with respect to scores on the Self-Report Psychopathy (SRP-III) questionnaire. Thirty undergraduates completed two tasks, the first of which required the recognition and categorization of affective face stimuli under varying presentation conditions. Happy, angry, or fearful faces were presented with attention directed to the mouth, nose, or eye region and with varied stimulus exposure durations (30, 75, or 150 ms). We found behavioural performance to be unrelated to psychopathic personality traits in all conditions, but there was a trend for the N170 to peak later in response to fearful and happy facial expressions for individuals high in psychopathic traits. However, the amplitude of the VPP was significantly negatively associated with psychopathic traits, but only in response to stimuli presented under a nose-level fixation. Finally, psychopathic traits were found to be associated with longer N170 latencies in response to stimuli presented under the 30 ms exposure duration. In the second task, participants were required to inhibit processing of irrelevant affective and scrambled face distractors while categorizing unrelated word stimuli as living or nonliving. Psychopathic traits were hypothesized to be positively associated with behavioural performance, as it was proposed that individuals high in psychopathic traits would be less likely to automatically attend to task-irrelevant affective distractors, facilitating word categorization. Thus, decreased interference would be reflected in smaller N170 components, indicating less neural activity associated with processing of distractor faces. We found that overall performance decreased in the presence of angry and fearful distractor faces as psychopathic traits increased. In addition, the amplitude of the N170 decreased and the latency increased in response to affective distractor faces for individuals with higher levels of psychopathic traits. Although we failed to find the predicted behavioural deficit in emotion recognition in Task 1 and the facilitation effect in Task 2, the findings of increased N170 and VPP latencies in response to emotional faces are consistent with the proposition that abnormal emotion recognition processes may in fact be inherent to psychopathy as a continuous personality trait.

Relevance: 60.00%

Abstract:

This lexical decision study with eye tracking of Japanese two-kanji-character words investigated the order in which a whole two-character word and its morphographic constituents are activated in the course of lexical access, the relative contributions of the left and the right characters in lexical decision, the depth to which semantic radicals are processed, and how nonlinguistic factors affect lexical processes. Mixed-effects regression analyses of response times and subgaze durations (i.e., first-pass fixation time spent on each of the two characters) revealed joint contributions of morphographic units at all levels of the linguistic structure with the magnitude and the direction of the lexical effects modulated by readers’ locus of attention in a left-to-right preferred processing path. During the early time frame, character effects were larger in magnitude and more robust than radical and whole-word effects, regardless of the font size and the type of nonwords. Extending previous radical-based and character-based models, we propose a task/decision-sensitive character-driven processing model with a level-skipping assumption: Connections from the feature level bypass the lower radical level and link up directly to the higher character level.

Relevance: 60.00%

Abstract:

The main objective of this thesis was to quantify and compare the effort required to recognize speech in noise in young adults and older adults with normal hearing and normal visual acuity (with or without corrective lenses). The effort associated with speech perception refers to the attentional and cognitive resources required to understand speech. The first study (Experiment 1) aimed to evaluate the effort associated with auditory speech recognition (hearing a talker), while the second study (Experiment 2) aimed to evaluate the effort associated with audiovisual speech recognition (hearing and seeing a talker's face). Effort was measured in two different ways. First, through a behavioural approach using an experimental paradigm known as a dual task, which combined a word recognition task with a vibrotactile pattern recognition task. In addition, effort was quantified using a questionnaire asking participants to rate the effort associated with the behavioural tasks. The two measures of effort were used under two different experimental conditions: 1) equivalent level, in which the level of the noise masking the speech was the same for all participants; and 2) equivalent performance, in which the noise level was adjusted so that performance on the word recognition task was identical for the two groups of participants. Performance levels on the vibrotactile task revealed that older adults expend more effort than young adults under both experimental conditions, regardless of the perceptual modality in which the speech stimuli were presented (i.e., auditory-only or audiovisual). Overall, the 'cost' associated with performance on the vibrotactile task was highest for older adults when speech was presented in the audiovisual modality. While visual cues can improve audiovisual speech recognition, our results suggest that they can also place an additional load on the resources used to process the information. This additional load has detrimental consequences for performance on the word recognition and vibrotactile pattern recognition tasks when these are carried out under dual-task conditions. Consistent with previous studies, the correlation coefficients computed from the data of Experiment 1 and Experiment 2 support the notion that dual-task behavioural measures and questionnaire responses assess different dimensions of the effort associated with speech recognition. Since the effort associated with speech perception depends on both auditory and cognitive factors, a third study was completed to explore whether auditory working memory helps to explain the variance in the data on the effort associated with speech perception. These analyses also allowed the response patterns obtained for these two factors to be compared between young adults and older adults.
For the young adults, the results of a sequential regression analysis showed that a measure of auditory capacity (span size) was related to effort, whereas a measure of auditory processing (alphabetical recall) was related to the accuracy with which words were recognized when presented under dual-task conditions. However, these relationships were not present in the data obtained for the group of older adults, nor in the data obtained when the speech recognition tasks were carried out in the audiovisual modality. Further studies are needed to identify the cognitive factors underlying the effort associated with speech perception, particularly in older adults.

Relevance: 60.00%

Abstract:

During a conversational exchange, language is supported by non-verbal communication, which plays a central role in human social behaviour by providing feedback and managing synchronization, thereby reinforcing the content and meaning of the discourse. Indeed, 55% of the message is conveyed by facial expressions, whereas only 7% is carried by the linguistic message and 38% by paralanguage. Information about a person's emotional state is generally inferred from facial attributes. However, there are few measurement instruments specifically dedicated to this type of behaviour. In computer vision, most of the interest lies in developing systems for the automatic analysis of prototypical facial expressions for applications in human-computer interaction, meeting video analysis, security, and even clinical settings. In the present research, to capture such observable indicators, we attempt to implement a system capable of building a consistent and relatively exhaustive source of visual information, one able to distinguish facial features and their deformations and thereby recognize the presence or absence of a particular facial action. A review of existing techniques led us to explore two different approaches. The first is appearance-based: gradient orientations are used to derive a dense representation of facial attributes. Beyond the facial representation itself, the main difficulty for a system intended to be general is building a generic model that is independent of the person's identity and of the geometry and size of the face. The approach we propose relies on constructing a prototypical reference frame through SIFT-flow registration, which we show in this thesis to be superior to conventional alignment based on eye positions. In the second approach, we use a geometric model in which facial primitives are represented by Gabor filtering. Motivated by the fact that facial expressions are not only ambiguous and inconsistent from one person to another but also dependent on the context itself, this approach leads to a personalized facial expression recognition system whose overall performance depends directly on the performance of the tracking of a set of facial feature points. This tracking is carried out by a modified form of a disparity estimation technique based on Gabor phase. In this thesis, we propose a redefinition of the confidence measure and introduce an iterative, conditional displacement estimation procedure that provides more robust tracking than the original methods.
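As a rough illustration of the Gabor-based representation mentioned in the second approach (not the thesis's actual configuration), the sketch below builds a small Gabor filter bank with OpenCV and samples filter-response magnitudes at tracked landmark positions; the bank parameters and the landmark handling are assumptions made for the example.

```python
# A minimal sketch: Gabor magnitude responses sampled at facial landmarks.
# Filter-bank parameters and the landmark list are illustrative assumptions.
import cv2
import numpy as np

def gabor_bank(n_orientations=8, n_scales=4, ksize=21):
    """Small bank of Gabor kernels over several orientations and scales."""
    bank = []
    for s in range(n_scales):
        lambd = 4.0 * (2 ** s)                      # wavelength doubles per scale
        for o in range(n_orientations):
            theta = np.pi * o / n_orientations      # orientation in radians
            bank.append(cv2.getGaborKernel((ksize, ksize), lambd / 2.0,
                                           theta, lambd, 0.5, 0))
    return bank

def landmark_descriptors(gray_face, landmarks, bank):
    """Gabor magnitude at each (x, y) landmark, concatenated over the bank."""
    face = gray_face.astype(np.float32)
    responses = [cv2.filter2D(face, -1, k) for k in bank]
    return np.array([[abs(r[y, x]) for r in responses] for (x, y) in landmarks])
```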

Relevance: 60.00%

Abstract:

Speech processing and consequent recognition are important areas of Digital Signal Processing, since speech allows people to communicate more naturally and efficiently. In this work, a speech recognition system is developed for recognizing digits in Malayalam. For recognizing speech, features must be extracted from the speech signal, and hence the feature extraction method plays an important role in speech recognition. Here, front-end processing for extracting the features is performed using two wavelet-based methods, namely Discrete Wavelet Transforms (DWT) and Wavelet Packet Decomposition (WPD). A Naive Bayes classifier is used for classification. With the Naive Bayes classifier, DWT produced a recognition accuracy of 83.5% and WPD produced an accuracy of 80.7%. This paper is intended to devise a new feature extraction method that improves the recognition accuracy. So a new method called Discrete Wavelet Packet Decomposition (DWPD) is introduced, which utilizes the hybrid features of both DWT and WPD. The performance of this new approach was evaluated, and it produced an improved recognition accuracy of 86.2% with the Naive Bayes classifier.
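The abstract describes the pipeline only at a high level; the following is a minimal sketch of the general approach, not the authors' implementation. Sub-band log-energies from a DWT and a wavelet packet decomposition (via PyWavelets) are concatenated as a stand-in for the DWPD hybrid features and fed to a Gaussian Naive Bayes classifier; the wavelet, decomposition levels, and energy features are illustrative assumptions.

```python
# A minimal sketch: wavelet sub-band energy features for isolated-digit audio,
# classified with Gaussian Naive Bayes. Assumes each utterance is already
# segmented and resampled to a fixed rate; all parameter choices are assumptions.
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def dwt_features(signal, wavelet="db4", level=5):
    """Log-energy of each DWT sub-band (approximation + details)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

def wpd_features(signal, wavelet="db4", level=3):
    """Log-energy of every terminal node of a wavelet packet tree."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.log(np.sum(n.data ** 2) + 1e-12) for n in nodes])

def hybrid_features(signal):
    """Concatenated DWT and WPD energies, in the spirit of the DWPD idea."""
    return np.concatenate([dwt_features(signal), wpd_features(signal)])

def train_and_score(train_signals, train_labels, test_signals, test_labels):
    """train/test signals: lists of 1-D numpy arrays; labels: digit ids."""
    clf = GaussianNB()
    clf.fit([hybrid_features(s) for s in train_signals], train_labels)
    preds = clf.predict([hybrid_features(s) for s in test_signals])
    return np.mean(preds == np.asarray(test_labels))
```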

Relevance: 60.00%

Abstract:

The performance of any continuous speech recognition system depends on the accuracy of its acoustic model. Hence, preparing a robust and accurate acoustic model leads to satisfactory recognition performance for a speech recognizer. In acoustic modeling of phonetic units, context information is of prime importance, as phonemes are found to vary according to their place of occurrence in a word. In this paper we compare and evaluate the effect of context-dependent tied (CD tied) models, context-dependent (CD) models, and context-independent (CI) models in the perspective of continuous speech recognition of the Malayalam language. The database for the speech recognition system has utterances from 21 speakers, including 11 females and 10 males. Our evaluation results show that CD tied models outperform CI models by over 21%.
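For illustration only, here is a minimal sketch of the distinction between context-independent and context-dependent units, assuming a triphone-style labelling; the state tying step itself (clustering acoustically similar context-dependent states) is omitted, and the phone symbols are hypothetical.

```python
# A minimal sketch: expanding a phone transcription into CI monophone labels
# and CD triphone labels of the form left-phone+right. Not the authors' toolkit.
def ci_labels(phones):
    """Context-independent units: one label per phone."""
    return list(phones)

def cd_labels(phones, boundary="sil"):
    """Context-dependent units: each phone labelled with its neighbours."""
    padded = [boundary] + list(phones) + [boundary]
    return [f"{padded[i - 1]}-{padded[i]}+{padded[i + 1]}"
            for i in range(1, len(padded) - 1)]

# Hypothetical phone sequence for a short word:
print(ci_labels(["o", "n", "n"]))   # ['o', 'n', 'n']
print(cd_labels(["o", "n", "n"]))   # ['sil-o+n', 'o-n+n', 'n-n+sil']
```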

Relevance: 60.00%

Abstract:

Development of Malayalam speech recognition systems is still in its infancy, although much work has been done in other Indian languages. In this paper we present the first work on a speaker-independent Malayalam isolated-word speech recognizer based on Perceptual Linear Predictive (PLP) cepstral coefficients and Hidden Markov Models (HMMs). The performance of the developed system has been evaluated with different numbers of HMM states. The system is trained with 21 male and female speakers in the age group ranging from 19 to 41 years and obtained an accuracy of 99.5% on unseen data.
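As an illustration of the overall recognizer structure implied above (one HMM per word, maximum-likelihood word selection), here is a minimal sketch using hmmlearn. MFCCs computed with librosa stand in for the PLP cepstra used in the paper, and the state count and topology are assumptions rather than the authors' settings.

```python
# A minimal sketch of an isolated-word recognizer: one Gaussian HMM per word,
# pick the word whose model gives the highest log-likelihood. MFCCs are used
# here as a stand-in for PLP cepstra; parameters are illustrative assumptions.
import numpy as np
import librosa
from hmmlearn import hmm

def features(signal, sr=16000):
    """13 cepstral coefficients per frame, frames along the first axis."""
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13).T

def train_word_models(train_data, n_states=5):
    """train_data: dict mapping word -> list of 1-D audio arrays."""
    models = {}
    for word, signals in train_data.items():
        feats = [features(s) for s in signals]
        X = np.vstack(feats)                       # stacked frames for this word
        lengths = [f.shape[0] for f in feats]      # per-utterance frame counts
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=20)
        m.fit(X, lengths)
        models[word] = m
    return models

def recognize(models, signal):
    """Return the word whose HMM assigns the highest log-likelihood."""
    feats = features(signal)
    return max(models, key=lambda w: models[w].score(feats))
```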

Relevance: 60.00%

Abstract:

A key problem in object recognition is selection, namely, the problem of identifying regions in an image within which to start the recognition process, ideally by isolating regions that are likely to come from a single object. Such a selection mechanism has been found to be crucial in reducing the combinatorial search involved in the matching stage of object recognition. Even though selection is of help in recognition, it has largely remained unsolved because of the difficulty in isolating regions belonging to objects under complex imaging conditions involving occlusions, changing illumination, and object appearances. This thesis presents a novel approach to the selection problem by proposing a computational model of visual attentional selection as a paradigm for selection in recognition. In particular, it proposes two modes of attentional selection, namely, attracted and pay attention modes as being appropriate for data and model-driven selection in recognition. An implementation of this model has led to new ways of extracting color, texture and line group information in images, and their subsequent use in isolating areas of the scene likely to contain the model object. Among the specific results in this thesis are: a method of specifying color by perceptual color categories for fast color region segmentation and color-based localization of objects, and a result showing that the recognition of texture patterns on model objects is possible under changes in orientation and occlusions without detailed segmentation. The thesis also presents an evaluation of the proposed model by integrating with a 3D from 2D object recognition system and recording the improvement in performance. These results indicate that attentional selection can significantly overcome the computational bottleneck in object recognition, both due to a reduction in the number of features, and due to a reduction in the number of matches during recognition using the information derived during selection. Finally, these studies have revealed a surprising use of selection, namely, in the partial solution of the pose of a 3D object.
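As a rough illustration of colour-category-based selection (not the thesis's actual method), the sketch below quantizes hue into a few coarse named categories and returns connected regions of the requested category as candidate attention windows; the bin boundaries and thresholds are invented for the example.

```python
# A minimal sketch: coarse perceptual colour categories as a fast selection cue.
# Hue bins and thresholds are illustrative assumptions, not the thesis's values.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from scipy import ndimage

HUE_BINS = {"red": (0.95, 0.05), "yellow": (0.10, 0.20),
            "green": (0.22, 0.45), "blue": (0.55, 0.70)}

def category_mask(rgb_image, category, min_sat=0.3, min_val=0.2):
    """Boolean mask of pixels whose hue falls in the named category."""
    hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    lo, hi = HUE_BINS[category]
    in_hue = (h >= lo) & (h <= hi) if lo < hi else (h >= lo) | (h <= hi)
    return in_hue & (s >= min_sat) & (v >= min_val)

def candidate_regions(rgb_image, category, min_area=200):
    """Connected components of the requested category, largest first."""
    labels, n = ndimage.label(category_mask(rgb_image, category))
    masks = [(labels == i) for i in range(1, n + 1)]
    masks = [m for m in masks if m.sum() >= min_area]
    return sorted(masks, key=lambda m: m.sum(), reverse=True)
```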

Relevance: 60.00%

Abstract:

A method is presented for the visual analysis of objects by computer. It is particularly well suited for opaque objects with smoothly curved surfaces. The method extracts information about the object's surface properties, including measures of its specularity, texture, and regularity. It also aids in determining the object's shape. The application of this method to a simple recognition task, the recognition of fruit, is discussed. The results on a more complex smoothly curved object, a human face, are also considered.

Relevance: 60.00%

Abstract:

A persistent issue of debate in the area of 3D object recognition concerns the nature of the experientially acquired object models in the primate visual system. One prominent proposal in this regard has expounded the use of object centered models, such as representations of the objects' 3D structures in a coordinate frame independent of the viewing parameters [Marr and Nishihara, 1978]. In contrast to this is another proposal which suggests that the viewing parameters encountered during the learning phase might be inextricably linked to subsequent performance on a recognition task [Tarr and Pinker, 1989; Poggio and Edelman, 1990]. The 'object model', according to this idea, is simply a collection of the sample views encountered during training. Given that object centered recognition strategies have the attractive feature of leading to viewpoint independence, they have garnered much of the research effort in the field of computational vision. Furthermore, since human recognition performance seems remarkably robust in the face of imaging variations [Ellis et al., 1989], it has often been implicitly assumed that the visual system employs an object centered strategy. In the present study we examine this assumption more closely. Our experimental results with a class of novel 3D structures strongly suggest the use of a view-based strategy by the human visual system even when it has the opportunity of constructing and using object-centered models. In fact, for our chosen class of objects, the results seem to support a stronger claim: 3D object recognition is 2D view-based.

Relevance: 60.00%

Abstract:

This paper sketches a hypothetical cortical architecture for visual 3D object recognition based on a recent computational model. The view-centered scheme relies on modules for learning from examples, such as HyperBF-like networks. Such models capture a class of explanations we call Memory-Based Models (MBM) that contains sparse population coding, memory-based recognition, and codebooks of prototypes. Unlike the sigmoidal units of some artificial neural networks, the units of MBMs are consistent with the description of cortical neurons. We describe how an example of MBM may be realized in terms of cortical circuitry and biophysical mechanisms, consistent with psychophysical and physiological data.
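Below is a minimal sketch of the memory-based flavour of model discussed above, assuming a Gaussian radial-basis ("HyperBF-like") network whose hidden units are stored prototype views and whose output weights are fit by regularized least squares; the prototype choice, basis width, and regularizer are illustrative assumptions, not the paper's specification.

```python
# A minimal sketch of a view-based, memory-based recognition module: a Gaussian
# RBF network over stored prototype views. Parameters are illustrative.
import numpy as np

class RBFViewNetwork:
    def __init__(self, prototypes, sigma=1.0, ridge=1e-3):
        self.prototypes = np.asarray(prototypes)   # stored example views (P, D)
        self.sigma = sigma                         # tuning width of each unit
        self.ridge = ridge                         # regularization strength
        self.weights = None

    def _activations(self, X):
        """Gaussian tuning of each hidden unit to its prototype view."""
        d2 = ((X[:, None, :] - self.prototypes[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def fit(self, X, y):
        """Fit output weights by regularized least squares."""
        G = self._activations(np.asarray(X))
        A = G.T @ G + self.ridge * np.eye(G.shape[1])
        self.weights = np.linalg.solve(A, G.T @ np.asarray(y))
        return self

    def predict(self, X):
        """Weighted sum of prototype-unit activations."""
        return self._activations(np.asarray(X)) @ self.weights
```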

Relevance: 60.00%

Abstract:

Understanding how biological visual systems perform object recognition is one of the ultimate goals in computational neuroscience. Among the biological models of recognition the main distinctions are between feedforward and feedback and between object-centered and view-centered. From a computational viewpoint the different recognition tasks - for instance categorization and identification - are very similar, representing different trade-offs between specificity and invariance. Thus the different tasks do not strictly require different classes of models. The focus of the review is on feedforward, view-based models that are supported by psychophysical and physiological data.

Relevance: 60.00%

Abstract:

The visual recognition of complex movements and actions is crucial for communication and survival in many species. Remarkable sensitivity and robustness of biological motion perception have been demonstrated in psychophysical experiments. In recent years, neurons and cortical areas involved in action recognition have been identified in neurophysiological and imaging studies. However, the detailed neural mechanisms that underlie the recognition of such complex movement patterns remain largely unknown. This paper reviews the experimental results and summarizes them in terms of a biologically plausible neural model. The model is based on the key assumption that action recognition is based on learned prototypical patterns and exploits information from the ventral and the dorsal pathway. The model makes specific predictions that motivate new experiments.