894 results for FACE RECOGNITION


Relevance:

30.00%

Publisher:

Abstract:

A persistent issue of debate in the area of 3D object recognition concerns the nature of the experientially acquired object models in the primate visual system. One prominent proposal in this regard has advocated the use of object-centered models, such as representations of the objects' 3D structures in a coordinate frame independent of the viewing parameters [Marr and Nishihara, 1978]. A contrasting proposal suggests that the viewing parameters encountered during the learning phase may be inextricably linked to subsequent performance on a recognition task [Tarr and Pinker, 1989; Poggio and Edelman, 1990]. The 'object model', according to this idea, is simply a collection of the sample views encountered during training. Given that object-centered recognition strategies have the attractive feature of leading to viewpoint independence, they have garnered much of the research effort in the field of computational vision. Furthermore, since human recognition performance seems remarkably robust in the face of imaging variations [Ellis et al., 1989], it has often been implicitly assumed that the visual system employs an object-centered strategy. In the present study we examine this assumption more closely. Our experimental results with a class of novel 3D structures strongly suggest the use of a view-based strategy by the human visual system even when it has the opportunity to construct and use object-centered models. In fact, for our chosen class of objects, the results seem to support a stronger claim: 3D object recognition is 2D view-based.
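
The "collection of sample views" idea can be illustrated with a minimal sketch (not the authors' experimental procedure): each training view is stored as a 2D feature vector, and a probe is recognized by its nearest stored view. All names and the toy data below are hypothetical.

```python
import numpy as np

def recognize_view_based(probe, stored_views, labels):
    """Classify a probe by its nearest stored 2D view (toy view-based recognizer).

    probe        : 1-D feature vector describing the probe view
    stored_views : (n_views, n_features) array of training views
    labels       : object identity associated with each stored view
    """
    dists = np.linalg.norm(stored_views - probe, axis=1)
    return labels[int(np.argmin(dists))]

# Toy usage: two objects, three stored views each, plus a slightly perturbed probe.
rng = np.random.default_rng(0)
views = rng.normal(size=(6, 128))
labels = np.array(["object_A"] * 3 + ["object_B"] * 3)
print(recognize_view_based(views[1] + 0.01 * rng.normal(size=128), views, labels))
```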

Relevance:

30.00%

Publisher:

Abstract:

Understanding how the human visual system recognizes objects is one of the key challenges in neuroscience. Inspired by a large body of physiological evidence (Felleman and Van Essen, 1991; Hubel and Wiesel, 1962; Livingstone and Hubel, 1988; Tso et al., 2001; Zeki, 1993), a general class of recognition models has emerged which is based on a hierarchical organization of visual processing, with succeeding stages being sensitive to image features of increasing complexity (Hummel and Biederman, 1992; Riesenhuber and Poggio, 1999; Selfridge, 1959). However, these models appear to be incompatible with some well-known psychophysical results. Prominent among these are experiments investigating recognition impairments caused by vertical inversion of images, especially those of faces. It has been reported that faces that differ "featurally" are much easier to distinguish when inverted than those that differ "configurally" (Freire et al., 2000; Le Grand et al., 2001; Mondloch et al., 2002), a finding that is difficult to reconcile with the aforementioned models. Here we show that after controlling for subjects' expectations, there is no difference between "featurally" and "configurally" transformed faces in terms of inversion effect. This result reinforces the plausibility of simple hierarchical models of object representation and recognition in cortex.
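
As a rough sketch of the hierarchical scheme these models share (alternating stages of template matching and invariance-building pooling, in the spirit of Riesenhuber and Poggio, 1999), consider the toy pipeline below; the filter choices and sizes are illustrative assumptions, not taken from any of the cited models.

```python
import numpy as np

def s_layer(image, templates):
    """Template-matching stage: respond to local 3x3 patterns (a 'simple'-cell analogue)."""
    h, w = image.shape
    out = np.zeros((len(templates), h - 2, w - 2))
    for k, t in enumerate(templates):
        for i in range(h - 2):
            for j in range(w - 2):
                out[k, i, j] = np.sum(image[i:i + 3, j:j + 3] * t)
    return out

def c_layer(responses, pool=2):
    """Pooling stage: local max over position for invariance (a 'complex'-cell analogue)."""
    k, h, w = responses.shape
    h, w = h - h % pool, w - w % pool
    return responses[:, :h, :w].reshape(k, h // pool, pool, w // pool, pool).max(axis=(2, 4))

# Toy usage: two oriented edge templates applied to a random 16x16 "image".
templates = [np.array([[1, 0, -1]] * 3), np.array([[1, 0, -1]] * 3).T]
features = c_layer(s_layer(np.random.rand(16, 16), templates))
print(features.shape)  # (2, 7, 7)
```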

Relevance:

30.00%

Publisher:

Abstract:

The perceptual accuracy of university students in recognizing emotional facial expressions was compared between men and women from science and humanities courses. Emotional expression has attracted increasing interest in several areas involving human interaction, reflecting the importance of perceptual skills in the human expression of emotions for effective communication. Two tests were administered. In the first, 12 faces bearing an emotional expression were each shown briefly (0.5 s), followed by a neutral face, and subjects had to say whether happiness, sadness, anger, fear, disgust or surprise had been flashed; each emotion was shown twice, in random order. In the second test, 15 faces combining two emotional expressions were shown without a time limit, and the subject had to name one of the emotions from the previous list. In this study, women perceived sad expressions better, while men recognized happy faces better. There was no significant difference in the detection of the other emotions (anger, fear, surprise, disgust). When compared, students of the humanities and sciences of both sexes had similar capacities to perceive emotional expressions.

Relevance:

30.00%

Publisher:

Abstract:

The effect of multiple sclerosis (MS) on the ability to identify emotional expressions in faces was investigated, and possible associations with patients’ characteristics were explored. Fifty-six non-demented MS patients and 56 healthy subjects (HS) with similar demographic characteristics performed an emotion recognition task (ERT) and the Benton Facial Recognition Test (BFRT), and answered the Hospital Anxiety and Depression Scale (HADS). Additionally, MS patients underwent a neurological examination and a comprehensive neuropsychological evaluation. The ERT consisted of 42 pictures of faces (depicting anger, disgust, fear, happiness, sadness, surprise and neutral expressions) from the NimStim set. An iViewX high-speed eye tracker was used to record eye movements during the ERT. Fixation times were calculated for two regions of interest (i.e., the eyes and the rest of the face). No significant differences were found between MS patients and HS on the ERT’s behavioral and oculomotor measures. Bivariate and multiple regression analyses revealed significant associations between ERT behavioral performance and demographic, clinical, psychopathological, and cognitive measures.
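
A minimal sketch of the fixation-time computation described above, assuming fixation events exported as (x, y, duration) triples and a rectangular eye region; the coordinates and ROI bounds are illustrative placeholders, not taken from the study.

```python
def dwell_times(fixations, eye_box):
    """Sum fixation durations falling inside the 'eyes' ROI vs. the rest of the face.

    fixations : iterable of (x, y, duration_ms) tuples exported by the eye tracker
    eye_box   : (x_min, y_min, x_max, y_max) rectangle around the eye region
    """
    eyes = rest = 0.0
    x0, y0, x1, y1 = eye_box
    for x, y, dur in fixations:
        if x0 <= x <= x1 and y0 <= y <= y1:
            eyes += dur
        else:
            rest += dur
    return {"eyes": eyes, "rest_of_face": rest}

# Toy usage with three hypothetical fixations (pixels, durations in ms).
print(dwell_times([(120, 80, 250), (130, 85, 300), (200, 220, 400)],
                  eye_box=(100, 60, 180, 110)))
```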

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new face verification algorithm based on Gabor wavelets and AdaBoost. In the algorithm, faces are represented by Gabor wavelet features generated by the Gabor wavelet transform. A family of Gabor wavelets with 5 scales and 8 orientations is chosen. By convolving face images with these 40 Gabor wavelets, the original images are transformed into magnitude response images of Gabor wavelet features. The AdaBoost algorithm selects a small set of significant features from the pool of Gabor wavelet features. Each feature is the basis for a weak classifier trained on face images taken from the XM2VTS database, and the feature with the lowest classification error is selected in each iteration of the AdaBoost procedure. We also address issues regarding the computational cost of feature selection with AdaBoost. A support vector machine (SVM) is trained with examples of 20 features, and the results show a low false positive rate and a low classification error rate in face verification.
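
A minimal sketch of the 5-scale, 8-orientation Gabor magnitude representation described above; the kernel size, wavelength spacing and bandwidth below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex Gabor kernel: a plane wave windowed by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)           # coordinate along the wave direction
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * xr / wavelength)

def gabor_magnitudes(image, scales=5, orientations=8):
    """Convolve an image with a 5x8 Gabor bank and keep the magnitude responses."""
    responses = []
    for s in range(scales):
        wavelength = 4.0 * (2 ** (s / 2.0))              # illustrative spacing of scales
        for o in range(orientations):
            theta = o * np.pi / orientations
            kernel = gabor_kernel(31, wavelength, theta, sigma=0.56 * wavelength)
            responses.append(np.abs(fftconvolve(image, kernel, mode="same")))
    return np.stack(responses)                           # (40, H, W) pool of candidate features

feats = gabor_magnitudes(np.random.rand(64, 64))
print(feats.shape)                                       # (40, 64, 64)
```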

Relevance:

30.00%

Publisher:

Abstract:

Facial expression recognition was investigated in 20 males with high-functioning autism (HFA) or Asperger syndrome (AS), compared to typically developing individuals matched for chronological age (TD CA group) and for verbal and non-verbal ability (TD V/NV group). This was the first study to employ a visual search, “face in the crowd” paradigm with an HFA/AS group, exploring responses to numerous facial expressions using real-face stimuli. Results showed slower response times for processing fear, anger and sad expressions in the HFA/AS group relative to the TD CA group, but not the TD V/NV group. Responses to happy, disgust and surprise expressions showed no group differences. Results are discussed with reference to the amygdala theory of autism.

Relevance:

30.00%

Publisher:

Abstract:

Background: Atypical self-processing is an emerging theme in autism research, suggested by a lower self-reference effect in memory and atypical neural responses to visual self-representations. Most research on physical self-processing in autism uses visual stimuli. However, the self is a multimodal construct, so it is essential to test self-recognition in other sensory modalities as well. Self-recognition in the auditory modality remains relatively unexplored and has not been tested in relation to autism and related traits. This study investigates self-recognition in the auditory and visual domains in the general population and tests whether it is associated with autistic traits.

Methods: Thirty-nine neurotypical adults participated in a two-part study. In the first session, each participant’s voice was recorded and face photographed, and these were morphed with voices and faces from unfamiliar identities, respectively. In the second session, participants performed a ‘self-identification’ task, classifying each morph as a ‘self’ voice (or face) or an ‘other’ voice (or face). All participants also completed the Autism Spectrum Quotient (AQ). For each sensory modality, the slope of the self-recognition curve was used as the individual self-recognition metric. The two metrics were tested for association with each other and with autistic traits.

Results: The fifty percent ‘self’ response was reached at a higher percentage of self in the auditory domain than in the visual domain (t = 3.142; P < 0.01). No significant correlation was noted between self-recognition biases across sensory modalities (τ = −0.165, P = 0.204). A higher recognition bias for self-voice was observed in individuals higher in autistic traits (τ AQ = 0.301, P = 0.008). No such correlation was observed between recognition bias for self-face and autistic traits (τ AQ = −0.020, P = 0.438).

Conclusions: Our data show that recognition bias for physical self-representation is not related across sensory modalities. Further, individuals with higher autistic traits were better able to discriminate self from other voices, but this relation was not observed for self-face. The narrower self-other overlap in the auditory domain seen in individuals with high autistic traits could arise from the enhanced perceptual processing of auditory stimuli often observed in individuals with autism.
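
A minimal sketch of one way to obtain the slope metric described above: fit a logistic psychometric curve to the proportion of 'self' responses as a function of the percentage of self in the morph, and read off the fitted slope. The function names and toy numbers are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, midpoint, slope):
    """Psychometric curve: probability of a 'self' response vs. % self in the morph."""
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

def self_recognition_metrics(morph_percent_self, prop_self_responses):
    """Fit the curve and return its midpoint (50% 'self' point) and slope."""
    (midpoint, slope), _ = curve_fit(logistic, morph_percent_self,
                                     prop_self_responses, p0=[50.0, 0.1])
    return midpoint, slope

# Toy usage: morph levels (% self) and the proportion of 'self' classifications.
levels = np.array([0, 20, 40, 60, 80, 100], dtype=float)
props = np.array([0.02, 0.10, 0.35, 0.75, 0.95, 0.99])
print(self_recognition_metrics(levels, props))
```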

Relevance:

30.00%

Publisher:

Abstract:

Background: Children with callous-unemotional (CU) traits, a proposed precursor to adult psychopathy, are characterized by impaired emotion recognition, reduced responsiveness to others’ distress, and a lack of guilt or empathy. Reduced attention to faces, and more specifically to the eye region, has been proposed to underlie these difficulties, although this has never been tested longitudinally from infancy. Attention to faces occurs within the context of dyadic caregiver interactions, and the early environment, including parenting characteristics, has been associated with CU traits. The present study tested whether infants’ preferential tracking of a face with direct gaze and levels of maternal sensitivity predict later CU traits.

Methods: Data were analyzed from a stratified random sample of 213 participants drawn from a population-based sample of 1233 first-time mothers. Infants’ preferential face tracking at 5 weeks and maternal sensitivity at 29 weeks were entered into a weighted linear regression as predictors of CU traits at 2.5 years.

Results: Controlling for a range of confounders (e.g., deprivation), lower preferential face tracking predicted higher CU traits (p = .001). Higher maternal sensitivity predicted lower CU traits in girls (p = .009), but not in boys. No significant interaction between face tracking and maternal sensitivity was found.

Conclusions: This is the first study to show that attention to social features during infancy, as well as early sensitive parenting, predicts the subsequent development of CU traits. Identifying such early atypicalities offers the potential for developing parent-mediated interventions in children at risk of developing CU traits.
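
A minimal sketch of a weighted linear regression of the kind named in the Methods, solved via the weighted normal equations; the predictors, weights and sample below are synthetic placeholders, not the study's data.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve min_b sum_i w_i * (y_i - X_i b)^2 via the weighted normal equations."""
    X = np.column_stack([np.ones(len(y)), X])   # add an intercept column
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # [intercept, coefficients...]

# Toy usage: predict a trait score from two standardized predictors
# (stand-ins for face tracking and maternal sensitivity) with sampling weights.
rng = np.random.default_rng(1)
X = rng.normal(size=(213, 2))
y = -0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.5, size=213)
w = rng.uniform(0.5, 2.0, size=213)             # weights for the stratified design
print(weighted_least_squares(X, y, w))
```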

Relevance:

30.00%

Publisher:

Abstract:

Periocular recognition has recently become an active topic in biometrics. Typically it uses 2D image data of the periocular region. This paper is the first description of combining 3D shape structure with 2D texture. A simple and effective technique using the iterative closest point (ICP) algorithm was applied for 3D periocular region matching; it proved robust to relatively unconstrained eye region capture and requires no training. Local binary patterns (LBP) were applied for 2D image-based periocular matching, and the two modalities were combined at the score level. This approach was evaluated using the Bosphorus 3D face database, which contains large variations in facial expressions, head poses and occlusions. The rank-1 accuracy achieved from the 3D data (80%) was better than that for 2D (58%), and the best accuracy (83%) was achieved by fusing the two types of data. This suggests that significant improvements to periocular recognition systems could be achieved using the 3D structure information that is now available from small and inexpensive sensors.
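
A minimal sketch of score-level fusion as described above, assuming both modalities yield similarity scores: each modality's scores are min-max normalized and combined with a weighted sum. The weight and the toy scores are arbitrary illustrations, not the paper's values or fusion rule.

```python
import numpy as np

def min_max(scores):
    """Map a set of match scores to [0, 1] so the two modalities are comparable."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

def fuse_score_level(scores_3d, scores_2d, w3d=0.6):
    """Score-level fusion: weighted sum of normalized 3D (ICP) and 2D (LBP) scores."""
    return w3d * min_max(scores_3d) + (1.0 - w3d) * min_max(scores_2d)

# Toy usage: similarity scores of one probe against five gallery subjects.
icp_scores = [0.81, 0.40, 0.35, 0.55, 0.30]   # 3D shape matching (hypothetical)
lbp_scores = [0.62, 0.58, 0.20, 0.44, 0.25]   # 2D texture matching (hypothetical)
print("best match:", int(np.argmax(fuse_score_level(icp_scores, lbp_scores))))
```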

Relevance:

30.00%

Publisher:

Abstract:

Anti-spoofing is attracting growing interest in biometrics, given the variety of fake materials and new means of attacking biometric recognition systems. New, unseen materials continuously challenge state-of-the-art spoofing detectors, calling for additional systematic approaches to anti-spoofing. Incorporating liveness scores into the biometric fusion process can enhance recognition accuracy, but traditional sum-rule based fusion algorithms are known to be highly sensitive to single spoofed instances. This paper investigates 1-median filtering as a spoofing-resistant generalised alternative to the sum rule, targeting the problem of partial multibiometric spoofing where m out of n biometric sources to be combined are attacked. Augmenting previous work, this paper investigates the dynamic detection and rejection of liveness-recognition pair outliers for spoofed samples in a true multi-modal configuration, with its inherent challenge of normalisation. As a further contribution, a bootstrap aggregating (bagging) classifier for fingerprint spoof detection is presented. Experiments on the latest face video databases (Idiap Replay-Attack Database and CASIA Face Anti-Spoofing Database) and a fingerprint spoofing database (Fingerprint Liveness Detection Competition 2013) illustrate the efficiency of the proposed techniques.
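
To illustrate why the 1-median is a spoofing-resistant alternative to the sum rule, the sketch below fuses n match scores both ways; for scalar scores the 1-median reduces to the ordinary median, so a single spoofed (outlier) source barely moves it. The scores are invented for illustration.

```python
import numpy as np

def sum_rule(scores):
    """Classic sum-rule fusion (the mean gives an equivalent ranking)."""
    return float(np.mean(scores))

def one_median(scores):
    """1-median fusion: the point minimizing total absolute deviation,
    which for scalar scores is simply the median."""
    return float(np.median(scores))

# Toy usage: an impostor attacks one of four biometric sources and drives
# that source's match score to 0.99; the other three scores stay low.
impostor_scores = [0.10, 0.15, 0.12, 0.99]
print("sum rule :", sum_rule(impostor_scores))     # pulled up by the spoofed source
print("1-median :", one_median(impostor_scores))   # stays close to the genuine low scores
```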

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial texture (2D photographs). The proposed system allows one to obtain a 3D geometric representation of a given face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase of the system, facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photographs. Principal component analysis (PCA) is then used to represent the face dataset, thus defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, the input is a face image to which the ASM is matched. The extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions from ten subjects as the training set show rapid reconstruction of 3D faces that maintain spatial coherence consistent with human perception, corroborating the efficiency and applicability of the proposed system.
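
A minimal sketch of the PCA step described above: an orthonormal basis is estimated from flattened training vectors (texture or geometry), and a new sample is projected onto it and reconstructed. The sizes and random data are placeholders for the actual 70-expression training set.

```python
import numpy as np

def pca_basis(data, n_components):
    """Orthonormal PCA basis of a training set (rows = flattened samples)."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:n_components]               # sample mean and principal directions

def project_and_reconstruct(sample, mean, basis):
    """Express a new sample in the PCA basis and map it back."""
    coeffs = basis @ (sample - mean)
    return coeffs, mean + basis.T @ coeffs

# Toy usage: 70 flattened "texture" vectors of length 256 (placeholder data).
rng = np.random.default_rng(2)
train = rng.normal(size=(70, 256))
mean, basis = pca_basis(train, n_components=20)
coeffs, approx = project_and_reconstruct(train[0], mean, basis)
print(coeffs.shape, approx.shape)                # (20,) (256,)
```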

Relevance:

30.00%

Publisher:

Abstract:

Child facial cues evoke attention and parental care behaviors and modulate infant-caretaker interactions. Lorenz described the baby schema (Kindchenschema) as a set of infantile physical features such as a large head, round face, high and protruding forehead, big eyes, chubby cheeks, and a small nose and mouth. Previous work on this fundamental concept was restricted to the positive perception of infant faces and did not show consistent results about how individuals' perceptions of the physical attributes that serve as markers of cuteness develop. Here, we experimentally tested the effects of the baby schema on the perception of cuteness of infant faces by children and adults. We used 60 photographs, without graphical manipulation, of the faces of different stimulus children aged 4 to 9 years. In the first task, adult participants were shown ten stimulus photos, whereas child participants were shown four stimulus photos at a time over a total of six rounds. The second task involved only adults, who indicated on a Likert scale their motivation for affective and care behaviors directed toward the children. Our results suggest that both groups of participants judged the cuteness of children's faces similarly, and the physical features marking this perception were observed only for the younger stimulus children. Adults attributed more motivation for positive behaviors toward the cuter stimulus children. The recognition of the baby schema by individuals of different ages and genders attests to the universality and power of children's physical attributes. From an evolutionary perspective, responsiveness to the baby schema is important to ensure alloparental and parental investment and, consequently, children's survival.

Relevance:

30.00%

Publisher:

Abstract:

Gender recognition has achieved impressive results based on face appearance in controlled datasets. Its application to in-the-wild and large datasets is still a challenging task for researchers. In this paper, we make use of classical techniques and analyze their performance in controlled and uncontrolled conditions, respectively, with the LFW and MORPH datasets. For both sets, the benchmarking protocol follows the 5-fold cross-validation proposed by the BEFIT challenge.
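
A minimal sketch of the 5-fold cross-validation protocol mentioned above, using scikit-learn with a linear classifier on placeholder features; real descriptors would be extracted from the LFW or MORPH images, which are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder stand-ins for face descriptors and binary gender labels.
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 128))
y = rng.integers(0, 2, size=500)

# 5-fold cross-validation, reporting mean accuracy over the folds.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("mean accuracy over 5 folds: %.3f" % scores.mean())
```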

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we experimentally study the combination of face and facial feature detectors to improve face detection performance. The face detection problem, as suggested by recent face detection challenges, is still not solved. Face detectors traditionally fail in large-scale problems and/or when the face is occluded or different head rotations are present. The combination of face and facial feature detectors is evaluated on a public database. The obtained results show an improvement in the positive detection rate while reducing the false detection rate. Additionally, we show that the integration of facial feature detectors provides useful information for pose estimation and face alignment.
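
One simple way to realize the combination described above (a sketch, not the paper's exact scheme): keep a face candidate only if enough independently detected facial features fall inside its bounding box. The boxes and feature points below are hypothetical detector outputs.

```python
def inside(point, box):
    """True if a detected facial-feature centre lies within a face candidate box."""
    x, y = point
    bx, by, bw, bh = box
    return bx <= x <= bx + bw and by <= y <= by + bh

def combine_detections(face_boxes, feature_points, min_features=1):
    """Keep face candidates supported by at least `min_features` facial-feature
    detections (e.g., eyes, nose), discarding likely false positives."""
    kept = []
    for box in face_boxes:
        support = sum(inside(p, box) for p in feature_points)
        if support >= min_features:
            kept.append(box)
    return kept

# Toy usage: two face candidates (x, y, w, h); only the first contains eye detections.
faces = [(40, 30, 100, 100), (300, 200, 80, 80)]
eyes = [(70, 60), (110, 62)]
print(combine_detections(faces, eyes, min_features=2))
```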

Relevance:

30.00%

Publisher:

Abstract:

One of the most consistent findings in the neuroscience of autism is hypoactivation of the fusiform gyrus (FG) during face processing. In this study the authors examined whether successful facial affect recognition training is associated with an increased activation of the FG in autism. The effect of a computer-based program to teach facial affect identification was examined in 10 individuals with high-functioning autism. Blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI) changes in the FG and other regions of interest, as well as behavioral facial affect recognition measures, were assessed pre- and posttraining. No significant activation changes in the FG were observed. Trained participants showed behavioral improvements, which were accompanied by higher BOLD fMRI signals in the superior parietal lobule and maintained activation in the right medial occipital gyrus.