8 results for disgust
in Queensland University of Technology - ePrints Archive
Abstract:
Are 'disappointment' and 'the teaching of disgust' the core of TV Studies? Or might teaching better be accomplished by inspiring positive civic action? Either way, doesn't reality TV do it better? John Hartley uses examples from reality TV to discuss this question.
Abstract:
Spontaneous facial expressions differ from posed ones in appearance, timing and accompanying head movements. Still images cannot provide timing or head-movement information directly. Indirectly, however, the distances between key points on a face, extracted from a still image using active shape models, can capture some movement and pose changes. This information is superposed on information about the non-rigid facial movement that is also part of the expression. Does geometric information improve the discrimination between spontaneous and posed facial expressions arising from discrete emotions? We investigate the performance of a machine vision system that uses SIFT appearance-based features and FAP geometric features to discriminate between posed and spontaneous versions of six basic emotions. Experimental results on the NVIE database demonstrate that fusing in geometric information leads to only marginal improvement over appearance features alone. Using the fused features, surprise is the easiest emotion to distinguish (83.4% accuracy), while disgust is the most difficult (76.1%). Our results identify different important facial regions for discriminating the posed versus spontaneous versions of one emotion than for classifying that emotion against other emotions. The distribution of the selected SIFT features shows that the mouth is more important for sadness and the nose for surprise, whereas both the nose and mouth are important for disgust, fear, and happiness. Eyebrows, eyes, nose and mouth are all important for anger.
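A minimal sketch of the fusion idea described in this abstract, assuming facial landmarks are already available (e.g. from an active shape model) and using a plain linear SVM in place of whatever classifier and feature-selection scheme the paper actually employs; the data loading, the NVIE protocol, and all function and array names here are illustrative only.

```python
# Sketch: fuse SIFT appearance descriptors with landmark-distance
# ("FAP-like") geometric features for posed-vs-spontaneous classification.
import numpy as np
import cv2
from itertools import combinations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

sift = cv2.SIFT_create()

def appearance_features(gray_face, landmarks):
    # SIFT descriptors computed at fixed facial key points,
    # flattened into a single appearance vector.
    kps = [cv2.KeyPoint(float(x), float(y), 16) for x, y in landmarks]
    _, desc = sift.compute(gray_face, kps)
    return desc.ravel()

def geometric_features(landmarks):
    # Geometry approximated as pairwise distances between landmarks.
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

def fused_features(gray_face, landmarks):
    # Feature-level fusion by simple concatenation.
    return np.concatenate([appearance_features(gray_face, landmarks),
                           geometric_features(landmarks)])

# X: fused feature vectors, y: 0 = posed, 1 = spontaneous (hypothetical arrays)
# print(cross_val_score(SVC(kernel="linear"), X, y, cv=10).mean())
```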
Abstract:
In automatic facial expression recognition, an increasing number of techniques has been proposed in the literature that exploit the temporal nature of facial expressions. As all facial expressions are known to evolve over time, it is crucially important for a classifier to be capable of modelling their dynamics. We establish that the sparse representation (SR) classifier is a suitable candidate for this purpose, and subsequently propose a framework through which expression dynamics can be efficiently incorporated into its current formulation. We additionally show that, for the SR method to be applied effectively, a certain threshold on image dimensionality must be enforced (unlike in facial recognition problems). Thirdly, we determine that recognition rates may be significantly influenced by the size of the projection matrix Φ. To demonstrate these points, a battery of experiments was conducted on the CK+ dataset for the recognition of the seven prototypic expressions (anger, contempt, disgust, fear, happiness, sadness and surprise), and comparisons are made between the proposed temporal-SR framework, the static-SR framework and a state-of-the-art support vector machine.
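A minimal sketch of a sparse-representation classifier with a random projection matrix Φ, in the spirit of the framework this abstract describes; the temporal extension, the CK+ handling, and the exact l1 solver are not reproduced, and the Lasso-based coding below is only an approximation of the usual l1-minimisation step. All names are illustrative.

```python
# Sketch: SR classification by sparse coding over a projected training
# dictionary, assigning the class with the smallest reconstruction residual.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(train_X, train_y, test_x, proj_dim=128, alpha=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    phi = rng.normal(size=(proj_dim, train_X.shape[1]))   # projection matrix Φ
    A = phi @ train_X.T             # dictionary: one projected column per training image
    A /= np.linalg.norm(A, axis=0)
    y = phi @ test_x
    y /= np.linalg.norm(y)
    # Sparse coding of the test sample over the training dictionary
    # (l1-regularised least squares as a stand-in for exact l1-minimisation).
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(A, y)
    x = coder.coef_
    # Class-wise residuals: keep only the coefficients of one class at a time.
    residuals = {}
    for c in np.unique(train_y):
        xc = np.where(train_y == c, x, 0.0)
        residuals[c] = np.linalg.norm(y - A @ xc)
    return min(residuals, key=residuals.get)
```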
Abstract:
Artists with disabilities working in Live Art paradigms often present performances which replay the social attitudes they are subject to in daily life as guerrilla theatre in public spaces – including online spaces. In doing so, these artists draw spectators’ attention to the way their responses to disabled people contribute to the social construction of disability. They provide different theatrical, architectural or technological devices to encourage spectators to articulate their response to themselves and others. But – the use of exaggeration, comedy and confrontation in these practices notwithstanding – their blurry boundaries mean some spectators experience confusion as to whether they are responding to real life or a representation of it. This results in conflicted responses which reveal as much about the politics of disability as the performances themselves. In this paper, I examine how these conflicted responses play out in online forums. I discuss diverse examples, from blog comments on Liz Crow’s Resistance on the Plinth on YouTube, to Aaron Williamson and Katherine Araniello’s Disabled Avant-Garde clips on YouTube, to Ju Gosling’s Letter Writing Project on her website, to segments of UK Channel 4’s mock reality show Cast Offs on YouTube. I demonstrate how online forums become a place not just for recording memories of an original performance (which posters may not have seen), but for a new performance, which goes well beyond re-membering/remediating the original. I identify trends in the way experience, memory and meaning-making play out in these performative forums – moving from clarification of the original act’s parameters, to claims of disgust, insult or offense, to counter-claims confirming the comic or political efficacy of the act, often linked to disclosure of personal memory or experience of disability. I examine the way these encounters at the interstices of live and/or online performance, memory, technology and public/private history negotiate ideas about disability, and what they tell us about the ethics and efficacy of the specific modes of performance and spectatorship these artists with disabilities are invoking.
Abstract:
In Transfigured Stages: Major Practitioners and Theatre Aesthetics in Australia, Margaret Hamilton traces the emergence of a postdramatic performance aesthetic in Australian theatre in the 1980s, 1990s and early 2000s through what she characterizes as an ‘analysis’ (p. 15) or ‘critique’ (p. 16) of a series of pivotal productions. For Hamilton, the transfigured aesthetic in the spotlight here is one typified by a focus on memory, imagination, desire, fear or disgust as facets of the human condition; by a visual, televisual or interactive dramaturgy; and, most critically, by a metatheatrical tendency to make tensions in the theatre-making process part and parcel of the tensions in the performance itself (pp. 18–20)...
Abstract:
There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and were disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. Among both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays.
Abstract:
Representation of facial expressions using continuous dimensions has been shown to be inherently more expressive and psychologically meaningful than using categorized emotions, and has thus gained increasing attention over recent years. Many sub-problems have arisen in this new field that remain only partially understood. A comparison of the regression performance of different texture and geometric features, and investigation of the correlations between continuous dimensional axes and basic categorized emotions, are two of these. This paper presents empirical studies addressing these problems, and it reports results from an evaluation of different methods for detecting spontaneous facial expressions within the arousal-valence dimensional space (AV). The evaluation compares the performance of texture features (SIFT, Gabor, LBP) against geometric features (FAP-based distances), and the fusion of the two. It also compares the prediction of arousal and valence, obtained using the best fusion method, to the corresponding ground truths. Spatial distribution, shift, similarity, and correlation are considered for the six basic categorized emotions (i.e. anger, disgust, fear, happiness, sadness, surprise). Using the NVIE database, results show that the fusion of LBP and FAP features performs best. The results from the NVIE and FEEDTUM databases reveal novel findings about the correlations of the arousal and valence dimensions to each of the six basic emotion categories.
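A minimal sketch of continuous arousal-valence prediction from fused LBP texture and landmark-distance ("FAP-like") geometric features, as a rough analogue of the best-performing fusion reported above; the regressor, feature parameters and data arrays are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: LBP + geometric feature fusion, regressed onto (arousal, valence).
import numpy as np
from itertools import combinations
from skimage.feature import local_binary_pattern
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def lbp_histogram(gray_face, P=8, R=1):
    # Uniform LBP codes summarised as a normalised histogram.
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def geometry(landmarks):
    # Pairwise landmark distances as a stand-in for FAP-based features.
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

def fuse(gray_face, landmarks):
    return np.concatenate([lbp_histogram(gray_face), geometry(landmarks)])

# X: fused features per sample, Y: columns = (arousal, valence) ground truth
model = make_pipeline(StandardScaler(),
                      MultiOutputRegressor(SVR(kernel="rbf", C=1.0)))
# model.fit(X, Y); predictions = model.predict(X_test)
```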
Abstract:
Neuroimaging research has shown localised brain activation to different facial expressions. This, along with the finding that schizophrenia patients perform poorly in their recognition of negative emotions, has raised the suggestion that patients display an emotion-specific impairment. We propose that this asymmetry in performance reflects task difficulty gradations, rather than aberrant processing in neural pathways subserving recognition of specific emotions. A neural network model is presented, which classifies facial expressions on the basis of measurements derived from human faces. After training, the network showed an accuracy pattern closely resembling that of healthy subjects. Lesioning of the network led to an overall decrease in its discriminant capacity, with the greatest decrease in accuracy for fear, disgust and anger stimuli. This implies that the differential pattern of impairment in schizophrenia patients can be explained without having to postulate impairment of specific processing modules for negative emotion recognition.
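A minimal sketch of the general idea only: a small feed-forward network that classifies expressions from facial measurements and is then "lesioned" by silencing a fraction of its hidden units. The architecture, training data and lesioning scheme used in the paper are not reproduced here; all names and parameters are hypothetical.

```python
# Sketch: train an MLP on facial measurements, then zero the outgoing
# weights of a random subset of hidden units and re-test per-emotion accuracy.
import numpy as np
from sklearn.neural_network import MLPClassifier

def lesion(model, fraction=0.3, seed=0):
    # Silence a random subset of hidden units by zeroing their
    # outgoing weights (a crude lesion of the trained network).
    rng = np.random.default_rng(seed)
    n_hidden = model.coefs_[1].shape[0]
    hit = rng.choice(n_hidden, size=int(fraction * n_hidden), replace=False)
    model.coefs_[1][hit, :] = 0.0
    return model

# X: facial measurements, y: expression labels (hypothetical arrays)
# net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
# lesioned = lesion(net, fraction=0.3)
# Per-emotion accuracy can then be compared before and after lesioning.
```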