23 results for Face recognition from video


Relevance: 40.00%

Abstract:

Several studies have investigated the role of featural and configural information in processing facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information; inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but that their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information: while the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.
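
As an aside, the A′ measure reported above has a standard closed form. The following is a minimal sketch of the non-parametric A′ statistic (Grier, 1971) computed from hit and false-alarm rates; it illustrates the measure itself and is not code from the study.

    def a_prime(hit_rate, fa_rate):
        """Non-parametric discriminability A' (Grier, 1971).
        0.5 is chance performance; 1.0 is perfect discrimination."""
        h, f = hit_rate, fa_rate
        if h >= f:
            return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
        # Symmetric form for below-chance performance.
        return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

    print(a_prime(0.80, 0.20))  # 0.875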

Relevance: 40.00%

Abstract:

Background: Emotional processing in essential hypertension has hardly been investigated beyond self-report questionnaires. The aim of this study was to examine associations between hypertension status and recognition of facial affect. Methods: 25 healthy, non-smoking, medication-free men, including 13 hypertensive subjects, aged between 20 and 65 years completed a computer-based task examining sensitivity of recognition of facial affect. Neutral faces gradually changed into a specific emotion in a pseudo-continuous manner. Slides of the six basic emotions (fear, sadness, disgust, happiness, anger, surprise) were chosen from the "NimStim Set". Pictures of three female and three male faces were electronically morphed in 1% steps of intensity from 0% to 100% (36 sets of faces with 100 pictures each). Each picture of a set was presented for one second. Participants were instructed to press a stop button as soon as they recognized the expression of the face; after stopping, a forced choice between the six basic emotions was required. As dependent variables, we recorded the emotion intensity at which the presentation was stopped and the number of errors (error rate). Recognition sensitivity was calculated as the emotion intensity of correctly identified emotions. Results: Mean arterial pressure was associated with significantly increased recognition sensitivity of facial affect for the emotion anger (β = -.43, p = .03, ΔR² = .110). There was no association with the emotions fear, sadness, disgust, happiness, and surprise (all p's > .41). Mean arterial pressure did not relate to the mean number of errors for any of the facial emotions. Conclusions: Our findings suggest that increased blood pressure is associated with increased recognition sensitivity for facial anger: hypertensives perceive angry expressions faster than normotensives.
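
For illustration, recognition sensitivity as defined above (the morph intensity at which a correctly identified emotion was stopped) could be scored per emotion roughly as follows; the trial structure and names are hypothetical, not taken from the study.

    from collections import defaultdict

    def recognition_sensitivity(trials):
        """Mean stop intensity (%) of correctly identified trials, per emotion.
        Each trial is a (true_emotion, chosen_emotion, stop_intensity) tuple;
        lower values mean the expression was recognized earlier in the morph."""
        correct = defaultdict(list)
        for true_emo, chosen_emo, intensity in trials:
            if chosen_emo == true_emo:  # errors are excluded from sensitivity
                correct[true_emo].append(intensity)
        return {emo: sum(v) / len(v) for emo, v in correct.items()}

    # Hypothetical trials: anger stopped at 35% and 45% intensity, one error.
    print(recognition_sensitivity([("anger", "anger", 35),
                                   ("anger", "anger", 45),
                                   ("anger", "fear", 60)]))  # {'anger': 40.0}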

Relevance: 40.00%

Abstract:

In this paper we present a solution to the problem of action and gesture recognition using sparse representations. The dictionary is modelled as a simple concatenation of features computed for each action or gesture class from the training data, and test data are classified by finding a sparse representation of the test video features over this dictionary. Our method does not impose any explicit training procedure on the dictionary. We evaluate our model with two kinds of features, obtained by projecting (i) Gait Energy Images (GEIs) and (ii) motion descriptors to a lower dimension using random projection. Experiments show a 100% recognition rate on standard datasets, and the results are compared with those obtained with the widely used SVM classifier.
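
The classification scheme described above can be sketched in a few lines. The sketch below assumes scikit-learn's OrthogonalMatchingPursuit as the sparse solver and a Gaussian random projection for dimensionality reduction; the abstract does not name a solver, so these are illustrative choices, not the authors' implementation.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit
    from sklearn.random_projection import GaussianRandomProjection

    def src_classify(train_feats, train_labels, test_feat, n_nonzero=10):
        """Sparse-representation classification: the dictionary is just the
        concatenation of training features (one atom per column), and the test
        sample goes to the class whose atoms give the smallest residual."""
        D = np.asarray(train_feats, dtype=float).T
        D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False)
        omp.fit(D, test_feat)                              # sparse code of test sample
        x, labels = omp.coef_, np.asarray(train_labels)
        residuals = {c: np.linalg.norm(test_feat - D @ np.where(labels == c, x, 0.0))
                     for c in np.unique(labels)}
        return min(residuals, key=residuals.get)

    # High-dimensional features (e.g. flattened GEIs) would first be reduced:
    proj = GaussianRandomProjection(n_components=128, random_state=0)
    # feats_lowdim = proj.fit_transform(feats_highdim)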

Relevance: 40.00%

Abstract:

Smart homes for the aging population have recently started attracting the attention of the research community. The "health state" of smart homes comprises many different levels: starting with the physical health of citizens, it also includes longer-term health norms and outcomes, as well as the arena of positive behavior change. One problem of interest is to monitor the activities of daily living (ADL) of the elderly, aiming at their protection and well-being. For this purpose, we installed passive infrared (PIR) sensors to detect motion in a specific area inside a smart apartment and used them to collect a set of ADL. In a novel approach, we describe a technology that allows the ground truth collected in one smart home to train activity recognition systems for other smart homes. We asked the users to label all instances of all ADL only once and subsequently applied data mining techniques to cluster in-home sensor firings, so that each cluster represents the instances of the same activity. Once the clusters were associated with their corresponding activities, our system was able to recognize future activities. To improve activity recognition accuracy, our system preprocessed raw sensor data by identifying overlapping activities. To evaluate recognition performance on a 200-day dataset, we implemented three different active learning classification algorithms and compared their performance: naive Bayes (NB), support vector machine (SVM) and random forest (RF). Based on our results, the RF classifier recognized activities with an average specificity of 96.53%, a sensitivity of 68.49%, a precision of 74.41% and an F-measure of 71.33%, outperforming both the NB and SVM classifiers. Further clustering markedly improved the results of the RF classifier. An activity recognition system based on PIR sensors in conjunction with a clustering classification approach was thus able to detect ADL from datasets collected in different homes. Our PIR-based smart home technology could therefore improve care and provide valuable information to better understand the functioning of our societies, as well as to inform both individual and collective action in a smart city scenario.
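
As a rough illustration of the final classification stage, the sketch below trains a random forest on featurized activity instances and reports the per-class metrics named above (sensitivity, precision, F-measure). The data and feature layout are synthetic stand-ins, not the study's dataset.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import precision_recall_fscore_support
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in: one row per activity instance (e.g. per-sensor firing
    # counts plus start hour and duration), with five hypothetical ADL classes.
    rng = np.random.default_rng(0)
    X = rng.random((500, 12))
    y = rng.integers(0, 5, size=500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Macro-averaged sensitivity (recall), precision, and F-measure, as in the
    # abstract; specificity would come from the per-class confusion matrices.
    prec, sens, f1, _ = precision_recall_fscore_support(
        y_te, rf.predict(X_te), average="macro", zero_division=0)
    print(f"sensitivity={sens:.2%}  precision={prec:.2%}  F-measure={f1:.2%}")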

Relevance: 40.00%

Abstract:

OBJECTIVE Vestibular neuritis is often mimicked by stroke (pseudoneuritis). Vestibular eye movements help discriminate the two conditions. We report vestibulo-ocular reflex (VOR) gain measures in neuritis and stroke presenting with acute vestibular syndrome (AVS). METHODS Prospective cross-sectional study of AVS (acute continuous vertigo/dizziness lasting >24 h) at two academic centers. We measured horizontal head impulse test (HIT) VOR gains in 26 AVS patients using a video HIT device (ICS Impulse). All patients were assessed within 1 week of symptom onset. Diagnoses were confirmed by clinical examination, brain magnetic resonance imaging with diffusion-weighted images, and follow-up. Brainstem and cerebellar strokes were classified by vascular territory: posterior inferior cerebellar artery (PICA) or anterior inferior cerebellar artery (AICA). RESULTS Diagnoses were vestibular neuritis (n = 16) and posterior fossa stroke (PICA, n = 7; AICA, n = 3). Mean HIT VOR gains (ipsilesional [standard error of the mean], contralesional [standard error of the mean]) were as follows: vestibular neuritis, 0.52 [0.04], 0.87 [0.04]; PICA stroke, 0.94 [0.04], 0.93 [0.04]; AICA stroke, 0.84 [0.10], 0.74 [0.10]. VOR gains were asymmetric in neuritis (unilateral vestibulopathy) and symmetric in PICA stroke (bilaterally normal VOR), whereas gains in AICA stroke were heterogeneous (asymmetric, bilaterally low, or normal). In vestibular neuritis, borderline gains ranged from 0.62 to 0.73. Twenty patients (12 neuritis, six PICA strokes, two AICA strokes) had at least five interpretable HIT trials for both ears, allowing classification based on mean VOR gains per ear. Classifying AVS patients with bilateral VOR mean gains of 0.70 or more as suspected strokes yielded a total diagnostic accuracy of 90%, with stroke sensitivity of 88% and specificity of 92%. CONCLUSION Video HIT VOR gains differ between peripheral and central causes of AVS. PICA strokes were readily separated from neuritis using gain measures, but AICA strokes were at risk of being misclassified on the basis of VOR gain alone.
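
The 0.70 decision rule reported above amounts to a two-line classifier; here it is as a minimal sketch (function name and usage are illustrative, not from the paper).

    STROKE_GAIN_CUTOFF = 0.70  # bilaterally normal gains suggest a central cause

    def suspect_stroke(gain_ipsi, gain_contra, cutoff=STROKE_GAIN_CUTOFF):
        """Flag a suspected stroke when the mean horizontal HIT VOR gain
        is at or above the cutoff in both ears."""
        return gain_ipsi >= cutoff and gain_contra >= cutoff

    print(suspect_stroke(0.94, 0.93))  # PICA-stroke-like gains -> True
    print(suspect_stroke(0.52, 0.87))  # neuritis-like gains    -> False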

Relevance: 40.00%

Abstract:

PURPOSE: To evaluate the quantitative and topographic relationship between reticular pseudodrusen (RPD) on infrared reflectance (IR) imaging and subretinal drusenoid deposits (SDD) on en face volumetric spectral domain optical coherence tomography (SD-OCT). METHODS: Reticular pseudodrusen were marked on IR images by a masked observer. Subretinal drusenoid deposits were visualized on en face SD-OCT sections below the external limiting membrane and identified by a semiautomated technique. Control RPD lesions were generated in a random distribution for each IR image. Binary maps of control and experimental RPD and SDD were merged and analyzed in terms of topographic localization and quantitative drusen load. RESULTS: A total of 54 eyes of 41 patients diagnosed with RPD were included in this study. The average number of RPD lesions on IR images was 320 ± 44.62, compared with 127 ± 26.02 SDD lesions on en face SD-OCT (P < 0.001). The majority of RPD lesions (92%) did not overlap with SDD lesions and were located >30 μm away. The percentage of total SDD lesions overlapping RPD was 2.91 ± 0.87%, compared with 1.73 ± 0.68% overlapping control RPD lesions (P < 0.05). The percentage of total SDD lesions between 1 and 3 pixels of the nearest RPD lesion was 5.08 ± 1.40%, compared with 3.33 ± 1.07% between 1 and 3 pixels of the nearest control RPD lesion (P < 0.05). CONCLUSION: This study identified significantly more RPD lesions on IR than SDD lesions on en face SD-OCT and found that the large majority of SDD lesions (>90%) were >30 μm away from the nearest RPD lesion. Together, our findings indicate that RPD and SDD are two entities that are only occasionally topographically associated, suggesting that they may nevertheless be pathologically related at some stage in their development.
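
The nearest-lesion distances above can be computed from the binary maps with a Euclidean distance transform. The sketch below works at the pixel level (the study counts lesions) and assumes the pixel size is known from the scan metadata; names are illustrative.

    import numpy as np
    from scipy import ndimage

    def fraction_within(sdd_map, rpd_map, max_dist_px):
        """Fraction of SDD-positive pixels lying within max_dist_px pixels of
        the nearest RPD-positive pixel (binary maps of equal shape)."""
        # Distance from every pixel to the nearest RPD pixel: the transform
        # measures distance to the nearest zero, so invert the RPD mask.
        dist_to_rpd = ndimage.distance_transform_edt(~rpd_map.astype(bool))
        return float((dist_to_rpd[sdd_map.astype(bool)] <= max_dist_px).mean())

    # e.g. the 30-um criterion: fraction_within(sdd, rpd, 30.0 / um_per_pixel)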

Relevance: 40.00%

Abstract:

The human turn-taking system regulates the smooth and precise exchange of speaking turns during face-to-face interaction. Recent studies have investigated the processing of ongoing turns during conversation by measuring the eye movements of noninvolved observers. The findings suggest that humans shift their gaze to the next speaker in anticipation, before the start of the next turn. Moreover, there is evidence that the ability to detect turn transitions in time relies mainly on the lexico-syntactic content of the conversation. Consequently, patients with aphasia, who often experience deficits in both semantic and syntactic processing, might have difficulty detecting turn transitions and shifting their gaze at them in time. To test this assumption, we presented video vignettes of natural conversations to aphasic patients and healthy controls while their eye movements were measured. The frequency and latency of event-related gaze shifts, with respect to the end of the current turn in the videos, were compared between the two groups. Our results suggest that, compared with healthy controls, aphasic patients are less likely to shift their gaze at turn transitions but do not show significantly increased gaze shift latencies. In healthy controls, but not in aphasic patients, the probability of shifting the gaze at a turn transition increased when the video content of the current turn had higher lexico-syntactic complexity. Furthermore, results from voxel-based lesion symptom mapping indicate that the association between lexico-syntactic complexity and gaze shift latency in aphasic patients is predicted by brain lesions located in the posterior branch of the left arcuate fasciculus. Higher lexico-syntactic processing demands thus seem to lead to a reduced gaze shift probability in aphasic patients, which may reflect missed opportunities for patients to place their contributions during everyday conversation.
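
For illustration, an event-related gaze shift around a turn end could be scored roughly as follows; the window and names are hypothetical, not the study's scoring protocol.

    def gaze_shift_latency(shift_times, turn_end, window=(-1.0, 2.0)):
        """Latency (s) of the first gaze shift in a window around a turn end.
        Negative latency = anticipatory shift; None = no shift in the window
        (a missed transition, lowering the gaze shift probability)."""
        lo, hi = turn_end + window[0], turn_end + window[1]
        in_window = [t for t in shift_times if lo <= t <= hi]
        return min(in_window) - turn_end if in_window else None

    # Hypothetical: the turn ends at 12.4 s, the observer shifts at 12.15 s.
    print(gaze_shift_latency([3.2, 12.15, 20.0], turn_end=12.4))  # ~ -0.25 s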