12 results for Arrhenius expressions
in Aston University Research Archive
Abstract:
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
Abstract:
The recognition of faces and of facial expressions is an important evolutionary skill, and an integral part of social communication. It has been argued that the processing of faces is distinct from the processing of non-face stimuli, and functional neuroimaging investigations have even found evidence of a distinction between the perception of faces and of emotional expressions. Structural and temporal correlates of face perception and facial affect have only been separately identified. Investigating the neural dynamics of face perception per se, as well as of facial affect, would allow these to be mapped in space-, time- and frequency-specific domains. Participants were asked to perform face categorisation and emotional discrimination tasks, and magnetoencephalography (MEG) was used to measure the neurophysiology of face and facial emotion processing. SAM analysis techniques enable the investigation of spectral changes within specific time-windows and frequency bands, thus allowing the identification of stimulus-specific regions of cortical power changes. Furthermore, MEG's excellent temporal resolution allows for the detection of subtle changes associated with the processing of face and non-face stimuli and different emotional expressions. The data presented reveal that face perception is associated with spectral power changes within a distributed cortical network comprising occipito-temporal as well as parietal and frontal areas. For the perception of facial affect, spectral power changes were also observed within frontal and limbic areas, including the parahippocampal gyrus and the amygdala. Analyses of temporal correlates also reveal a distinction between the processing of faces and facial affect. Face perception per se occurred at earlier latencies, whereas the discrimination of facial expression occurred within a longer time-window.
In addition, the processing of faces and facial affect was differentially associated with changes in cortical oscillatory power for alpha, beta and gamma frequencies. The perception of faces and facial affect is associated with distinct changes in cortical oscillatory activity that can be mapped to specific neural structures, specific time-windows and latencies as well as specific frequency bands. Therefore, the work presented in this thesis provides further insight into the sequential processing of faces and facial affect.
Abstract:
Cognitive linguistics scholars argue that metaphor is fundamentally a conceptual process of mapping one domain of experience onto another domain. The study of metaphor in the context of Translation Studies has not, unfortunately, kept pace with the discoveries about the nature and role of metaphor in the cognitive sciences. This study aims primarily to fill part of this knowledge gap. Specifically, the thesis is an attempt to explore some implications of the conceptual theory of metaphor for translation. Because the study of metaphor in translation is also based on views about the nature of translation, the thesis first presents a general overview of the discipline of Translation Studies, describing the major models of translation. The study (in Chapter Two) then discusses the major traditional theories of metaphor (comparison, substitution and interaction theories) and shows how the ideas of those theories were adopted in specific translation studies of metaphor. After that, the study presents a detailed account of the conceptual theory of metaphor and some hypothetical implications for the study of metaphor in translation from the perspective of cognitive linguistics. The data and methodology are presented in Chapter Four. A novel classification of conceptual metaphor is presented which distinguishes between different source domains of conceptual metaphors: physical, human-life and intertextual. It is suggested that each source domain places different demands on translators. The major sources of the data for this study are (1) the translations done by the Foreign Broadcast Information Service (FBIS), a translation service of the Central Intelligence Agency (CIA) in the United States of America, of a number of speeches by the Iraqi president Saddam Hussein during the Gulf Crisis (1990-1991) and (2) official (governmental) Omani translations of National Day speeches of Sultan Qaboos bin Said of Oman.
Abstract:
Background - Difficulties in emotion processing and poor social function are common to bipolar disorder (BD) and major depressive disorder (MDD) depression, resulting in many BD depressed individuals being misdiagnosed with MDD. The amygdala is a key region implicated in processing emotionally salient stimuli, including emotional facial expressions. It is unclear, however, whether abnormal amygdala activity during positive and negative emotion processing represents a persistent marker of BD regardless of illness phase or a state marker of depression common or specific to BD and MDD depression. Methods - Sixty adults were recruited: 15 depressed with BD type 1 (BDd), 15 depressed with recurrent MDD, 15 with BD in remission (BDr), diagnosed with DSM-IV and Structured Clinical Interview for DSM-IV Research Version criteria; and 15 healthy control subjects (HC). Groups were age- and gender ratio-matched; patient groups were matched for age of illness onset and illness duration; depressed groups were matched for depression severity. The BDd were taking more psychotropic medication than other patient groups. All individuals participated in three separate 3T neuroimaging event-related experiments, where they viewed mild and intense emotional and neutral faces of fear, happiness, or sadness from a standardized series. Results - The BDd—relative to HC, BDr, and MDD—showed elevated left amygdala activity to mild and neutral facial expressions in the sad (p < .009) but not other emotion experiments that was not associated with medication. There were no other significant between-group differences in amygdala activity. Conclusions - Abnormally elevated left amygdala activity to mild sad and neutral faces might be a depression-specific marker in BD but not MDD, suggesting different pathophysiologic processes for BD versus MDD depression.
Abstract:
Structural analysis in handwritten mathematical expressions focuses on interpreting the recognized symbols using geometrical information such as the relative sizes and positions of the symbols. Most existing approaches rely on hand-crafted grammar rules to identify semantic relationships among the recognized mathematical symbols. They can easily fail when writing errors occur. Moreover, they assume the availability of the whole mathematical expression before the semantic information of the expression can be analyzed. To tackle these problems, we propose a progressive structural analysis (PSA) approach for the dynamic recognition of handwritten mathematical expressions. The proposed PSA approach is able to provide an analysis result immediately after each written input symbol. This has the advantage that users are able to detect any recognition errors immediately and correct only the mis-recognized symbols rather than the whole expression. Experiments conducted on the 57 most commonly used mathematical expressions have shown that the PSA approach achieves very good performance.
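The core idea, emitting an interpretation after every symbol rather than waiting for the complete expression, can be illustrated with a toy sketch. This is hypothetical code, not the authors' PSA algorithm: the spatial-relation rule and threshold below are invented for illustration.

```python
# Toy progressive structural analysis: classify each incoming symbol's
# relation to the previous one (inline / superscript / subscript) from its
# bounding box, and yield an interpretation after every symbol.

def classify_relation(prev_box, cur_box):
    """prev_box/cur_box: (x, y, w, h) with y increasing downwards."""
    prev_cy = prev_box[1] + prev_box[3] / 2.0
    cur_cy = cur_box[1] + cur_box[3] / 2.0
    threshold = prev_box[3] / 2.0  # half the previous symbol's height
    if cur_cy < prev_cy - threshold:
        return "SUPERSCRIPT"
    if cur_cy > prev_cy + threshold:
        return "SUBSCRIPT"
    return "INLINE"

def progressive_parse(symbols):
    """Yield an interpretation after each symbol, so errors surface early."""
    expr = []
    prev_box = None
    for label, box in symbols:
        if prev_box is None:
            expr.append(label)
        else:
            rel = classify_relation(prev_box, box)
            if rel == "SUPERSCRIPT":
                expr.append("^" + label)
            elif rel == "SUBSCRIPT":
                expr.append("_" + label)
            else:
                expr.append(label)
        prev_box = box
        yield "".join(expr)

# An 'x' followed by a small raised '2' is read as x^2.
strokes = [("x", (0, 10, 10, 10)), ("2", (12, 2, 5, 5))]
print(list(progressive_parse(strokes)))  # ['x', 'x^2']
```

Because an interpretation is available after every stroke, a user who sees `x^2` come out as `x2` can fix that one symbol immediately, which is the advantage the abstract describes.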
Abstract:
Sixteen clinically depressed patients and sixteen healthy controls were presented with a set of emotional facial expressions and were asked to identify the emotion portrayed by each face. They were subsequently given a recognition memory test for these faces. There was no difference between the groups in terms of their ability to identify emotion from faces. All participants identified emotional expressions more accurately than neutral expressions, with happy expressions being identified most accurately. During the recognition memory phase the depressed patients demonstrated superior memory for sad expressions, and inferior memory for happy expressions, relative to neutral expressions. Conversely, the controls demonstrated superior memory for happy expressions, and inferior memory for sad expressions, relative to neutral expressions. These results are discussed in terms of the cognitive model of depression proposed by Williams, Watts, MacLeod, and Mathews (1997).
Abstract:
We develop an analytical theory which allows us to identify the information spectral density limits of multimode optical fiber transmission systems. Our approach takes into account the Kerr-effect-induced interactions of the propagating spatial modes and derives closed-form expressions for the spectral density of the corresponding nonlinear distortion. Experimental characterization results have confirmed the accuracy of the proposed models. Application of our theory in different FMF transmission scenarios has predicted a ~10% variation in total system throughput due to changes associated with inter-mode nonlinear interactions, in agreement with an observed 3 dB increase in nonlinear noise power spectral density for a graded-index four-LP-mode fiber. © 2013 Optical Society of America.
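As a rough illustration of how a rise in nonlinear noise spectral density feeds into a throughput estimate, the sketch below plugs a 3 dB (2x) increase in nonlinear noise into a generic Shannon-style capacity formula. This is not the paper's closed-form theory; the bandwidth and power values are invented.

```python
import math

def throughput(p_sig, n_ase, n_nl, bandwidth_hz):
    """Shannon-style per-mode throughput estimate (bit/s) with the noise
    split into an amplifier (ASE) term and a nonlinear-distortion term."""
    snr = p_sig / (n_ase + n_nl)
    return bandwidth_hz * math.log2(1.0 + snr)

B = 32e9                          # 32 GHz signal bandwidth (assumed)
P, N_ASE, N_NL = 1.0, 0.02, 0.03  # normalised powers (assumed)

base = throughput(P, N_ASE, N_NL, B)
worse = throughput(P, N_ASE, 2 * N_NL, B)  # +3 dB nonlinear noise PSD
print(f"throughput change: {100 * (worse - base) / base:.1f}%")
```

With these made-up numbers the 3 dB noise increase costs on the order of ten percent of throughput, the same order of magnitude as the variation reported in the abstract, though the actual figure depends on the operating SNR.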
Abstract:
Impaired facial expression recognition has been associated with features of major depression, which could underlie some of the difficulties in social interactions in these patients. Patients with major depressive disorder and age- and gender-matched healthy volunteers judged the emotion of 100 facial stimuli displaying different intensities of sadness and happiness and neutral expressions, presented for short (100 ms) and long (2,000 ms) durations. Compared with healthy volunteers, depressed patients demonstrated subtle impairments in discrimination accuracy and a predominant bias against identifying mildly happy expressions as happy. The authors suggest that, in depressed patients, the inability to accurately identify subtle changes in facial expression displayed by others in social situations may underlie the impaired interpersonal functioning.
Abstract:
This study uses a purpose-built corpus to explore the linguistic legacy of Britain’s maritime history found in the form of hundreds of specialised ‘Maritime Expressions’ (MEs), such as TAKEN ABACK, ANCHOR and ALOOF, that permeate modern English. Selecting just those expressions commencing with ’A’, it analyses 61 MEs in detail and describes the processes by which these technical expressions, from a highly specialised occupational discourse community, have made their way into modern English. The Maritime Text Corpus (MTC) comprises 8.8 million words, encompassing a range of text types and registers, selected to provide a cross-section of ‘maritime’ writing. It is analysed using WordSmith analytical software (Scott, 2010), with the 100 million-word British National Corpus (BNC) as a reference corpus. Using the MTC, a list of keywords of specific salience within the maritime discourse has been compiled and, using frequency data, concordances and collocations, these MEs are described in detail and their use and form in the MTC and the BNC is compared. The study examines the transformation from ME to figurative use in the general discourse, in terms of form and metaphoricity. MEs are classified according to their metaphorical strength and their transference from maritime usage into new registers and domains such as those of business, politics, sports and reportage. A revised model of metaphoricity is developed and a new category of figurative expression, the ‘resonator’, is proposed. Additionally, developing the work of Lakoff and Johnson, Kövecses and others on Conceptual Metaphor Theory (CMT), a number of Maritime Conceptual Metaphors are identified and their cultural significance is discussed.
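The keyword-compilation step can be sketched with a keyness statistic of the kind corpus tools such as WordSmith compute: Dunning's log-likelihood comparing a word's frequency in the study corpus against a reference corpus. The counts below are invented for illustration; only the corpus sizes (8.8M-word MTC, 100M-word BNC) come from the abstract.

```python
import math

def log_likelihood(a, b, corpus_a, corpus_b):
    """Dunning's log-likelihood keyness score.
    a, b: observed counts of a word in corpora A and B;
    corpus_a, corpus_b: total token counts of the two corpora."""
    e1 = corpus_a * (a + b) / (corpus_a + corpus_b)  # expected count in A
    e2 = corpus_b * (a + b) / (corpus_a + corpus_b)  # expected count in B
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2.0 * ll

# Hypothetical: a maritime term appearing 120 times in the 8.8M-word study
# corpus but only 40 times in the 100M-word reference corpus scores highly,
# marking it as salient to the maritime discourse.
print(round(log_likelihood(120, 40, 8_800_000, 100_000_000), 1))
```

Ranking all word types by this score and keeping the top of the list is, in essence, how a keyword list "of specific salience within the maritime discourse" is compiled.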
Abstract:
Holistic face perception, i.e. the mandatory integration of featural information across the face, has been considered to play a key role when recognizing emotional face expressions (e.g., Tanaka et al., 2002). However, despite their early onset, holistic processing skills continue to improve throughout adolescence (e.g., Schwarzer et al., 2010) and therefore might modulate the evaluation of facial expressions. We tested this hypothesis using an attentional blink (AB) paradigm to compare the impact of happy, fearful and neutral faces in adolescents (10–13 years) and adults on subsequently presented neutral target stimuli (animals, plants and objects) in a rapid serial visual presentation stream. Adolescents and adults were found to be equally reliable when reporting the emotional expression of the face stimuli. However, the detection of emotional but not neutral faces imposed a significantly stronger AB effect on the detection of the neutral targets in adults compared to adolescents. In a control experiment we confirmed that adolescents rated emotional faces lower in terms of valence and arousal than adults. The results suggest a protracted development of the ability to evaluate facial expressions that might be attributed to the late maturation of holistic processing skills.
Abstract:
In conflicts, political attitudes are based to some extent on the perception of the outgroup as sharing the goal of peace and supporting steps to achieve it. However, intractable conflicts are characterized by inconsistent and negative interactions, which prevent clear messages of outgroup support. This problem calls for alternative ways to convey support between groups in conflict. One such method is emotional expressions. The current research tested whether, in the absence of outgroup support for peace, observing expressions of outgroup hope induces conciliatory attitudes. Results from two experimental studies, conducted within the Israeli-Palestinian conflict, revealed support for this hypothesis. Expressions of Palestinian hope induced acceptance of a peace agreement through Israeli hope and positive perceptions of the proposal when outgroup support expressions were low. Findings demonstrate the importance of hope as a means of conveying information within processes of conflict resolution, overriding messages of low outgroup support for peace.
Abstract:
The growth of social networking platforms has drawn a great deal of attention to the need for social computing. Social computing utilises human insights for computational tasks as well as the design of systems that support social behaviours and interactions. One of the key aspects of social computing is the ability to attribute responsibility, such as blame or praise, to social events. This ability helps an intelligent entity account for and understand other intelligent entities’ social behaviours, and enriches both the social functionalities and cognitive aspects of intelligent agents. In this paper, we present an approach with a model for blame and praise detection in text. We build our model on various theories of blame and include in it features that humans use when determining judgment, such as moral agent causality, foreknowledge, intentionality and coercion. An annotated corpus has been created for the task of blame and praise detection from text. The experimental results show that while our model gives results similar to supervised classifiers when classifying text as blame, praise or other, it outperforms supervised classifiers on the finer-grained task of determining the direction of blame and praise, i.e., self-blame, blame-others, self-praise or praise-others, despite not using labelled training data.
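The finer-grained direction task (self-blame, blame-others, self-praise, praise-others) can be illustrated with a toy rule-based sketch. The lexicons and rules below are invented and far simpler than the paper's model, which draws on cues such as causality, foreknowledge, intentionality and coercion.

```python
# Toy direction classifier: polarity from a small blame/praise lexicon,
# direction from whether the sentence mentions a first-person agent.
# Both word lists are hypothetical placeholders.

BLAME_WORDS = {"blame", "fault", "ruined", "failed"}
PRAISE_WORDS = {"praise", "credit", "brilliant", "proud"}
SELF_PRONOUNS = {"i", "we", "my", "our", "myself", "ourselves"}

def classify_direction(sentence):
    tokens = set(sentence.lower().replace(".", "").split())
    if tokens & BLAME_WORDS:
        polarity = "blame"
    elif tokens & PRAISE_WORDS:
        polarity = "praise"
    else:
        return "other"
    if tokens & SELF_PRONOUNS:
        return f"self-{polarity}"
    return f"{polarity}-others"

print(classify_direction("I blame myself for the mistake"))   # self-blame
print(classify_direction("They deserve credit for the win"))  # praise-others
```

A bag-of-words rule like this obviously ignores who the moral agent actually is; the point of the features listed in the abstract is precisely to resolve such cases ("I was forced to ruin it" should not read as plain self-blame).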