894 results for emotion socialization
Abstract:
Study of emotions in human-computer interaction is a growing research area. This paper presents an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish using different methods for feature selection. The RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek the most relevant feature subset. The three-phase approach was selected to check the validity of the proposed approach. The achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm for automatic emotion recognition across all feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. In order to check the goodness of the proposed process, a greedy search approach (FSS-Forward) was also applied and a comparison between the two is provided. Based on the achieved results, a set of the most relevant non-speaker-dependent features is proposed for both languages and new perspectives are suggested.
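As a rough illustration of the kind of pipeline described above (wrapper feature subset selection driven by an evolutionary algorithm, evaluated with an instance-based learner), the sketch below runs a simple genetic algorithm over binary feature masks and scores each mask by cross-validated k-NN accuracy. The synthetic feature matrix, GA parameters, and fitness definition are assumptions for illustration; they are not the RekEmozio features or the exact search used in the paper.

# Minimal sketch: GA-based wrapper feature subset selection with a k-NN scorer.
# All parameters and the toy data are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # Mean cross-validated accuracy of a k-NN classifier on the selected subset.
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def ga_feature_selection(X, y, pop_size=30, generations=40, p_mut=0.05):
    n_features = X.shape[1]
    pop = rng.random((pop_size, n_features)) < 0.5           # random binary masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep better half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_features) < p_mut           # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents] + children)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]

# Toy example with synthetic data standing in for acoustic features:
X = rng.normal(size=(120, 20))
y = rng.permutation(np.arange(120) % 7)   # balanced toy labels for seven emotions
best_mask = ga_feature_selection(X, y)
print("selected feature indices:", np.flatnonzero(best_mask))

The same loop would accept any real acoustic feature matrix in place of the synthetic one; a greedy FSS-Forward comparison, as mentioned in the abstract, would simply replace the GA loop with one-at-a-time feature addition.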
Abstract:
In this paper, a novel approach for Mandarin speech emotion recognition, based on high-dimensional geometry theory, is proposed. Human emotions are classified into six archetypal classes: fear, anger, happiness, sadness, surprise and disgust. According to the characteristics of these emotional speech signals, the amplitude, pitch frequency and formants are used as the feature parameters for speech emotion recognition. The new method, called high-dimensional geometry theory, is applied for recognition. Compared with the traditional GSVM model, the new method has some advantages. This method has significant value for future research and applications.
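The abstract names three feature families (amplitude, pitch frequency, formants) without giving extraction details, so the sketch below shows one conventional way to compute them with librosa; the file name, sampling rate, frame sizes, pitch range and LPC order are illustrative assumptions rather than the paper's settings.

# Minimal sketch: generic amplitude / pitch / formant extraction for one utterance.
# "utterance.wav" and all parameters are assumptions, not the paper's configuration.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)

# Amplitude: short-time RMS energy per frame.
rms = librosa.feature.rms(y=y, frame_length=512, hop_length=256)[0]

# Pitch: fundamental frequency contour via the pYIN tracker.
f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
f0_mean = np.nanmean(f0)                      # unvoiced frames are NaN

# Formants: rough estimate from the LPC roots of one windowed frame.
frame = y[len(y) // 2 : len(y) // 2 + 512] * np.hamming(512)
a = librosa.lpc(frame, order=12)
roots = [r for r in np.roots(a) if np.imag(r) > 0]
freqs = sorted(np.angle(roots) * sr / (2 * np.pi))
formants = [f for f in freqs if 90 < f < 5000][:3]   # crude F1-F3 estimates

print(f"RMS mean={rms.mean():.4f}, F0 mean={f0_mean:.1f} Hz, formants={formants}")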
Abstract:
Both animals and mobile robots, or animats, need adaptive control systems to guide their movements through a novel environment. Such control systems need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once the environment is familiar. How reactive and planned behaviors interact together in real time, and are released at the appropriate times, during autonomous navigation remains a major unsolved problem. This work presents an end-to-end model to address this problem, named SOVEREIGN: A Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation system. The model comprises several interacting subsystems, governed by systems of nonlinear differential equations. As the animat explores the environment, a vision module processes visual inputs using networks that are sensitive to visual form and motion. Targets processed within the visual form system are categorized by real-time incremental learning. Simultaneously, visual target position is computed with respect to the animat's body. Estimates of target position activate a motor system to initiate approach movements toward the target. Motion cues from animat locomotion can elicit orienting head or camera movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement, based on both visual and proprioceptive cues, are stored within a motor working memory. Sensory cues are stored in a parallel sensory working memory. These working memories trigger learning of sensory and motor sequence chunks, which together control planned movements. Effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. The planning chunks effect a gradual transition from reactive to planned behavior. The model can read out different motor sequences under different motivational states and learns more efficient paths to rewarded goals as exploration proceeds. Several volitional signals automatically gate the interactions between model subsystems at appropriate times. A 3-D visual simulation environment reproduces the animat's sensory experiences as it moves through a simplified spatial environment. The SOVEREIGN model exhibits robust goal-oriented learning of sequential motor behaviors. Its biomimetic structure explicates a number of brain processes which are involved in spatial navigation.
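The subsystems above are described as being governed by nonlinear differential equations with volitional gating signals, but the equations themselves are not reproduced here. As a loose, illustrative stand-in for that class of dynamics (not SOVEREIGN's actual equations), the sketch below integrates a single Grossberg-style shunting equation whose excitatory input is switched on and off by a gate signal.

# Illustrative sketch only: one gated shunting (membrane) equation integrated with
# Euler's method. Parameters and the gate schedule are assumptions, not SOVEREIGN's.
import numpy as np

A, B, C = 1.0, 1.0, 1.0           # decay rate, upper bound, lower bound
dt, T = 0.01, 10.0
steps = int(T / dt)

x = 0.0                           # activity of one model cell
trace = []
for k in range(steps):
    t = k * dt
    gate = 1.0 if 2.0 < t < 6.0 else 0.0   # "volitional" gate opens mid-run
    E = gate * 0.8                          # gated excitatory input
    I = 0.2                                 # tonic inhibitory input
    # dx/dt = -A*x + (B - x)*E - (C + x)*I, which keeps x bounded in (-C, B)
    x += dt * (-A * x + (B - x) * E - (C + x) * I)
    trace.append(x)

print(f"peak activity={max(trace):.3f}, final activity={trace[-1]:.3f}")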
Abstract:
Lewis proposes a "reconceptualization" (p. 1) of how to link the psychology and neurobiology of emotion and cognitive-emotional interactions. His main proposed themes have, in fact, been actively and quantitatively developed in the neural modeling literature for over thirty years. This commentary summarizes some of these themes and points to areas where research is particularly active.
Abstract:
Emotional and attentional functions are known to be distributed along ventral and dorsal networks in the brain, respectively. However, the interactions between these systems remain to be specified. The present study used event-related functional magnetic resonance imaging (fMRI) to investigate how attentional focus can modulate the neural activity elicited by scenes that vary in emotional content. In a visual oddball task, aversive and neutral scenes were presented intermittently among circles and squares. The squares were frequent standard events, whereas the other novel stimulus categories occurred rarely. One experimental group (N=10) was instructed to count the circles, whereas another group (N=12) counted the emotional scenes. A main effect of emotion was found in the amygdala (AMG) and ventral frontotemporal cortices. In these regions, activation was significantly greater for emotional than neutral stimuli but was invariant to attentional focus. A main effect of attentional focus was found in dorsal frontoparietal cortices, whose activity signaled task-relevant target events irrespective of emotional content. The only brain region that was sensitive to both emotion and attentional focus was the anterior cingulate gyrus (ACG). When circles were task-relevant, the ACG responded equally to circle targets and distracting emotional scenes. The ACG response to emotional scenes increased when they were task-relevant, and the response to circles concomitantly decreased. These findings support and extend prominent network theories of emotion-attention interactions that highlight the integrative role played by the anterior cingulate.
Abstract:
The intensity and valence of 30 emotion terms, 30 events typical of those emotions, and 30 autobiographical memories cued by those emotions were each rated by different groups of 40 undergraduates. A vector model gave a consistently better account of the data than a circumplex model, both overall and in the absence of high-intensity, neutral-valence stimuli. The Positive Activation - Negative Activation (PANA) model could be tested at high levels of activation, where it is identical to the vector model. The results replicated when ratings of arousal were used instead of ratings of intensity for the events and autobiographical memories. A reanalysis of word norms gave further support for the vector and PANA models by demonstrating that neutral-valence, high-arousal ratings resulted from the averaging of individual positive and negative valence ratings. Thus, compared with the circumplex model, the vector and PANA models provided better fits overall.
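For readers unfamiliar with the two geometries being compared, a schematic and simplified way to state them in the valence (v) by arousal/intensity (a) plane is: the circumplex places stimuli at roughly constant distance from a neutral center, whereas the vector model lets arousal grow with valence extremity, producing a V shape. The symbols below are illustrative, not the paper's fitted parameters.

\[
\text{circumplex: } (v_i, a_i) \approx (r\cos\theta_i,\ r\sin\theta_i),
\qquad
\text{vector: } a_i \approx \beta\,\lvert v_i\rvert,\ \beta > 0 .
\]

At high activation, the PANA model's positive- and negative-activation axes run along the two arms of the vector model's V, which is consistent with the abstract's note that the two models are identical there.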
Abstract:
Autobiographical memories may be recalled from two different perspectives: field memories, in which the person seems to remember the scene from his/her original point of view, and observer memories, in which the rememberer sees him/herself in the memory image. Here, 122 undergraduates participated in an experiment examining the relation between field and observer perspectives in memory for 10 different emotional states, including both positive and negative emotions and emotions associated with high vs. low intensity. The observer perspective was associated with reduced sensory and emotional reliving across all emotions. This effect was observed for naturally occurring memory perspective and when participants were instructed to change their perspective from field to observer, but not when participants were instructed to change perspective from observer to field.