165 results for Emotion annotation-scheme
Abstract:
Recent reform of the EU's Common Agricultural Policy (CAP) has led to a further decoupling of farm support. The EU believes that the new Single Payment Scheme, which replaces the former system of production-linked area and headage payments to farmers, will qualify for green-box status in the WTO. We examine this contention, particularly in light of the recent WTO panel report on upland cotton.
Abstract:
We show that the Hájek (Ann. Math. Statist. (1964) 1491) variance estimator can be used to estimate the variance of the Horvitz–Thompson estimator when the Chao sampling scheme (Chao, Biometrika 69 (1982) 653) is implemented. This estimator is simple and can be implemented in any statistical package. We use both a numerical and an analytic method to show that this estimator can be used, and a series of simulations supports our findings.
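As a concrete illustration of the estimators mentioned above, the sketch below computes the Horvitz–Thompson total and a Hájek-type variance approximation from the sampled values and their first-order inclusion probabilities. The data are invented and the code is a minimal Python sketch of the general formulas, not the authors' own implementation; the exact variant they study may differ in detail.

```python
import numpy as np

def ht_total(y, pi):
    """Horvitz-Thompson estimator of a population total."""
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    return np.sum(y / pi)

def hajek_variance(y, pi):
    """Hajek-type approximation to the variance of the HT estimator.

    Only the first-order inclusion probabilities of the sampled units
    are required, which is what makes this kind of estimator convenient
    to use with schemes such as Chao's (1982) sequential design.
    """
    y, pi = np.asarray(y, float), np.asarray(pi, float)
    n = len(y)
    a = 1.0 - pi                            # weights 1 - pi_i
    b = np.sum(a * y / pi) / np.sum(a)      # weighted mean of y_i / pi_i
    return n / (n - 1) * np.sum(a * (y / pi - b) ** 2)

# Toy example with made-up data: five sampled units.
y  = [12.0, 7.5, 30.2, 15.1, 9.8]
pi = [0.10, 0.05, 0.20, 0.15, 0.08]
print(ht_total(y, pi), hajek_variance(y, pi))
```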
Abstract:
Motivation: There is a frequent need to apply a large range of local or remote prediction and annotation tools to one or more sequences. We have created a tool able to dispatch one or more sequences to assorted services by defining a consistent XML format for data and annotations. Results: By analyzing annotation tools, we have determined that annotations can be described using one or more of six forms of data: numeric or textual annotation of residues, of domains (residue ranges), or of whole sequences. With this in mind, XML DTDs have been designed to store the input and output of any server. Plug-in wrappers to a number of services have been written and are called from a master script. The resulting APATML is then formatted for display in HTML; alternatively, further tools may be written to perform post-analysis.
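To make the data model concrete, here is a small Python sketch that builds an annotation document covering three of the six forms described above (numeric per-residue, textual per-domain, and textual whole-sequence annotations). The element and attribute names are hypothetical, chosen for illustration only; the actual APATML DTDs are not reproduced in this abstract.

```python
import xml.etree.ElementTree as ET

# Hypothetical element/attribute names -- the real APATML DTDs are not shown here.
doc = ET.Element("annotated_sequence", id="P12345")

# Numeric annotation of residues (e.g. a per-residue disorder score).
res = ET.SubElement(doc, "residue_annotation", type="numeric", name="disorder_score")
for pos, score in enumerate([0.12, 0.45, 0.83], start=1):
    ET.SubElement(res, "residue", position=str(pos)).text = str(score)

# Textual annotation of a domain (residue range), e.g. a predicted domain.
dom = ET.SubElement(doc, "domain_annotation", type="textual", name="domain_prediction")
ET.SubElement(dom, "domain", start="10", end="75").text = "SH3-like domain"

# Textual annotation of the whole sequence, e.g. a predicted localisation.
loc = ET.SubElement(doc, "sequence_annotation", type="textual", name="localisation")
loc.text = "cytoplasm"

print(ET.tostring(doc, encoding="unicode"))
```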
Abstract:
Decoding emotional prosody is crucial for successful social interactions, and continuous monitoring of emotional intent via prosody requires working memory. It has been proposed by Ross and others that emotional prosody cognitions in the right hemisphere are organized in an analogous fashion to propositional language functions in the left hemisphere. This study aimed to test the applicability of this model in the context of prefrontal cortex working memory functions. BOLD response data were therefore collected during performance of two emotional working memory tasks by participants undergoing fMRI. In the prosody task, participants identified the emotion conveyed in pre-recorded sentences, and working memory load was manipulated in the style of an N-back task. In the matched lexico-semantic task, participants identified the emotion conveyed by sentence content. Block-design neuroimaging data were analyzed parametrically with SPM5. At first, working memory for emotional prosody appeared to be right-lateralized in the PFC; however, further analyses revealed that it shared much bilateral prefrontal functional neuroanatomy with working memory for lexico-semantic emotion. Supplementary separate analyses of males and females suggested that these language functions were less bilateral in females, but their inclusion did not alter the direction of laterality. It is concluded that Ross et al.'s model is not applicable to prefrontal cortex working memory functions, that the evidence that working memory in prefrontal cortex cannot be subdivided according to material type is strengthened, and that incidental working memory demands may explain the frontal lobe involvement in emotional prosody comprehension revealed by neuroimaging studies.
Abstract:
Recent brain imaging studies using functional magnetic resonance imaging (fMRI) have implicated the insula and anterior cingulate cortices in the empathic response to another's pain. However, virtually nothing is known about the impact of the voluntary generation of compassion on this network. To investigate these questions we assessed brain activity using fMRI while novice and expert meditation practitioners generated a loving-kindness-compassion meditation state. To probe affective reactivity, we presented emotional and neutral sounds during the meditation and comparison periods. Our main hypothesis was that the concern for others cultivated during this form of meditation enhances affective processing, in particular in response to sounds of distress, and that this response to emotional sounds is modulated by the degree of meditation training. The presentation of the emotional sounds was associated with increased pupil diameter and activation of limbic regions (insula and cingulate cortices) during meditation (versus rest). During meditation, insula activation in response to negative sounds, relative to positive or neutral sounds, was greater in expert than in novice meditators. The strength of activation in insula was also associated with self-reported intensity of the meditation for both groups. These results support the role of the limbic circuitry in emotion sharing. The comparison of meditation vs. rest states between experts and novices also showed increased activation in the amygdala, right temporo-parietal junction (TPJ), and right posterior superior temporal sulcus (pSTS) in response to all sounds, suggesting greater detection of the emotional sounds and enhanced mentation in response to emotional human vocalizations in experts compared with novices during meditation. Together these data indicate that the mental expertise to cultivate positive emotion alters the activation of circuitries previously linked to empathy and theory of mind in response to emotional stimuli.
Abstract:
We examined whether it is possible to identify the emotional content of behaviour from point-light displays in which pairs of actors are engaged in interpersonal communication. These actors displayed a series of emotions, which included sadness, anger, joy, disgust, fear, and romantic love. In experiment 1, subjects viewed brief clips of these point-light displays presented the right way up and upside down. In experiment 2, the importance of the interaction between the two figures in the recognition of emotion was examined. Subjects were shown upright versions of (i) the original pairs (dyads), (ii) a single actor (monad), and (iii) a dyad comprising a single actor and his/her mirror image (reflected dyad). In each experiment, the subjects rated the emotional content of the displays by moving a slider along a horizontal scale, and all of the emotions received a rating for every clip. In experiment 1, when the displays were upright, the correct emotions were identified in each case except disgust; but when the displays were inverted, performance was significantly diminished for some emotions. In experiment 2, the recognition of love and joy was impaired by the absence of the acting partner, and the recognition of sadness, joy, and fear was impaired in the non-veridical (mirror-image) displays. These findings both support and extend previous research by showing that biological motion is sufficient for the perception of emotion, although inversion affects performance. Moreover, emotion perception from biological motion can be affected by the veridical or non-veridical social context within the displays.
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of multimedia content for very large multimedia corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually do not provide any knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on its semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content using an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
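As a rough sketch of how automatically extracted labels can be organised as a topic map, the Python fragment below represents the labels for a single shot as topics with occurrences (pointers back into the media) and links them with typed associations. The class and field names are purely illustrative; they are not the DREAM framework's actual data model or API.

```python
from dataclasses import dataclass, field

# Illustrative data model only -- not the DREAM framework's actual API.
@dataclass
class Topic:
    name: str
    occurrences: list = field(default_factory=list)   # e.g. (media file, timecode) pairs

@dataclass
class Association:
    kind: str       # e.g. "appears_with", "located_in"
    members: tuple  # topics taking part in the association

# Labels produced by a (hypothetical) automatic labelling engine for one shot.
labels = ["car chase", "night", "downtown street"]
topics = {name: Topic(name, [("reel_07.mov", "00:12:31")]) for name in labels}

# Typed associations let the topics be navigated and retrieved together.
associations = [
    Association("appears_with", (topics["car chase"], topics["night"])),
    Association("located_in", (topics["car chase"], topics["downtown street"])),
]

for a in associations:
    print(a.kind, "->", [t.name for t in a.members])
```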