993 results for emotional speech
Abstract:
This paper reports on a six-month longitudinal study exploring people’s personal and social emotional experience with Portable Interactive Devices (PIDs). The study takes an experience design approach and is based on the theoretical framework of Activity Theory. The focus is on emotional experiences and how artefacts mediate and potentially enhance this experience. The study identified several noteworthy aspects of PID interaction. Findings revealed that people interact with PIDs emotionally at both a personal and a social level, supporting previous studies. Further, the social level impacts significantly on the emotional experience attained. If negative social experiences exceeded negative personal experiences, the emotional experience was constant over the six months; if negative personal experiences surpassed negative social experiences, the emotional experience varied over the six months. The findings are discussed with regard to their significance for the field of design, their implications for future PID design and future research directions.
Abstract:
Unresolved painful emotional experiences, such as bereavement, trauma and disturbances in core relationships, are common presenting problems for clients of psychodrama or psychotherapy more generally. Emotional pain is experienced as a shattering of the sense of self and disconnection from others and, when unresolved, produces avoidant responses which inhibit the healing process. There is agreement across therapeutic modalities that exposure to emotional experience can increase the efficacy of therapeutic interventions. Moreno proposes that the activation of spontaneity is the primary curative factor in psychodrama and that healing occurs when the protagonist (client) engages with his or her wider social system and develops greater flexibility in response to that system. An extensive case-report literature describes the application of the psychodrama method in healing unresolved painful emotional experiences, but there is limited empirical research to verify the efficacy of the method or to identify the processes that are linked to therapeutic change. The purpose of the current research was to construct a model of protagonist change processes that could extend psychodrama theory, inform practitioners’ therapeutic decisions and contribute to understanding the common factors in therapeutic change. Four studies investigated protagonist processes linked to in-session resolution of painful emotional experiences. Significant therapeutic events were analysed using recordings and transcripts of psychodrama enactments, protagonist and director recall interviews and a range of process and outcome measures. A preliminary study (3 cases) identified four themes that were associated with helpful therapeutic events: enactment; the working alliance with the director and with group members; emotional release or relief; and social atom repair. The second study (7 cases) used Comprehensive Process Analysis (CPA) to construct a model of protagonists’ processes linked to in-session resolution. This model was then validated across four more cases in Study 3. Five meta-processes were identified: (i) a readiness to engage in the psychodrama process; (ii) re-experiencing and insight; (iii) activating resourcefulness; (iv) social atom repair with emotional release and (v) integration. Social atom repair with emotional release involved deeply experiencing a wished-for interpersonal experience accompanied by a free-flowing release of previously restricted emotion, and was most clearly linked to protagonists’ reports of reaching resolution and to post-session improvements in interpersonal relationships and sense of self. Acceptance of self in the moment increased protagonists’ capacity to generate new responses within each meta-process and, in resolved cases, there was evidence of spontaneity developing over time. The fourth study tested Greenberg’s allowing and accepting painful emotional experience model as an alternative explanation of protagonist change. The findings of this study suggested that while the process of allowing emotional pain was present in resolved cases, Greenberg’s model was not sufficient to explain the processes that lead to in-session resolution. The protagonist’s readiness to engage and activation of resourcefulness appear to facilitate the transition from problem identification to emotional release. Furthermore, experiencing a reparative relationship was found to be central to the healing process.
This research verifies that there can be in-session resolution of painful emotional experience during psychodrama and protagonists’ reports suggest that in-session resolution can heal the damage to the sense of self and the interpersonal disconnection that are associated with unresolved emotional pain. A model of protagonist change processes has been constructed that challenges the view of psychodrama as a primarily cathartic therapy, by locating the therapeutic experience of emotional release within the development of new role relationships. The five meta-processes which are described within the model suggest broad change principles which can assist practitioners to make sense of events as they unfold and guide their clinical decision making in the moment. Each meta-process was linked to specific post-session changes, so that the model can inform the development of therapeutic plans for individual clients and can aid communication for practitioners when a psychodrama intervention is used for a specific therapeutic purpose within a comprehensive program of therapy.
Abstract:
In an automotive environment, the performance of a speech recognition system is affected by environmental noise if the speech signal is acquired directly from a microphone. Speech enhancement techniques are therefore necessary to improve the speech recognition performance. In this paper, a field-programmable gate array (FPGA) implementation of dual-microphone delay-and-sum beamforming (DASB) for speech enhancement is presented. As the first step towards a cost-effective solution, the implementation described in this paper uses a relatively high-end FPGA device to facilitate the verification of various design strategies and parameters. Experimental results show that the proposed design can produce output waveforms close to those generated by a theoretical (floating-point) model with modest usage of FPGA resources. Speech recognition experiments are also conducted on enhanced in-car speech waveforms produced by the FPGA in order to compare recognition performance with the floating-point representation running on a PC.
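The abstract gives only a high-level description of the beamformer, but the core delay-and-sum operation is easy to sketch. Below is a minimal Python/NumPy floating-point model of two-microphone delay-and-sum, in the spirit of the theoretical (floating-point) model the FPGA output is compared against; the function names, the integer-sample delay and the cross-correlation delay estimate are illustrative assumptions rather than details taken from the paper.

    import numpy as np

    def delay_and_sum(mic1, mic2, delay_samples):
        """Advance mic2 by `delay_samples` (integer) and average the two channels.

        A real front end would typically support fractional delays via
        interpolation; this sketch keeps to whole samples for clarity.
        """
        d = int(delay_samples)
        out = np.zeros(len(mic1))
        if d >= 0:
            n = len(mic1) - d
            out[:n] = 0.5 * (mic1[:n] + mic2[d:d + n])
        else:
            n = len(mic1) + d
            out[-d:] = 0.5 * (mic1[-d:] + mic2[:n])
        return out

    def estimate_delay(mic1, mic2, max_lag):
        """Integer TDOA estimate: the lag of mic2 relative to mic1 that
        maximises the cross-correlation over +/- max_lag samples."""
        best_lag, best_score = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                score = np.dot(mic1[:len(mic1) - lag], mic2[lag:])
            else:
                score = np.dot(mic1[-lag:], mic2[:len(mic2) + lag])
            if score > best_score:
                best_lag, best_score = lag, score
        return best_lag

A fixed-point FPGA realisation replaces this floating-point arithmetic with scaled integer operations, which is where the design strategies and parameters mentioned in the abstract come into play.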
Abstract:
This research examined for the first time the relationship between emotional manipulation, emotional intelligence, and primary and secondary psychopathy. As predicted, in Study 1 (N = 73), emotional manipulation was related to both primary and secondary psychopathy. Only secondary psychopathy was related to perceived poor emotional skills. Secondary psychopathy was also related to emotional concealment. Emotional intelligence was negatively related to perceived poor emotional skills, emotional concealment, and primary and secondary psychopathy. In Study 2 (N = 275), two additional variables were included: alexithymia and ethical position. It was found that for males, primary psychopathy and emotional intelligence predicted emotional manipulation, while for females emotional intelligence acted as a suppressor, and ethical idealism and secondary psychopathy were additional predictors. For males, emotional intelligence and alexithymia were related to perceived poor emotional skills, while for females emotional intelligence, but not alexithymia, predicted perceived poor emotional skills, with ethical idealism acting as a suppressor. For both males and females, alexithymia predicted emotional concealment. These findings suggest that the mechanisms behind the emotional manipulation–psychopathy relationship differ as a function of gender. Examining the different aspects of emotional manipulation as separate but related constructs may enhance understanding of the construct of emotional manipulation.
Abstract:
Purpose: The classic study of Sumby and Pollack (1954, JASA, 26(2), 212-215) demonstrated that visual information aided speech intelligibility under noisy auditory conditions. Their work showed that visual information is especially useful under low signal-to-noise conditions where the auditory signal leaves greater margins for improvement. We investigated whether simulated cataracts interfered with the ability of participants to use visual cues to help disambiguate the auditory signal in the presence of auditory noise. Methods: Participants in the study were screened to ensure normal visual acuity (mean of 20/20) and normal hearing (auditory threshold ≤ 20 dB HL). Speech intelligibility was tested under an auditory-only condition and two visual conditions: normal vision and simulated cataracts. The light-scattering effects of cataracts were imitated using cataract-simulating filters. Participants wore blacked-out glasses in the auditory-only condition and lens-free frames in the normal auditory-visual condition. Individual sentences were spoken by a live speaker in the presence of prerecorded four-person background babble set to a speech-to-noise ratio (SNR) of -16 dB. The SNR was determined in a preliminary experiment to support 50% correct identification of sentences under the auditory-only condition. The speaker was trained to match the rate, intensity and inflections of a prerecorded audio track of everyday speech sentences. The speaker was blind to the visual conditions of the participant to control for bias. Participants’ speech intelligibility was measured by comparing the accuracy of their written account of what they believed the speaker to have said to the actual spoken sentence. Results: Relative to the normal vision condition, speech intelligibility was significantly poorer when participants wore simulated cataracts. Conclusions: The results suggest that cataracts may interfere with the acquisition of visual cues to speech perception.
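The abstract reports that the babble was set to a -16 dB speech-to-noise ratio but does not spell out the level calibration, which is standard practice rather than a contribution of the study. Purely as an illustration of that arithmetic, the sketch below scales a noise recording so that a speech recording sits at a chosen SNR; the variable names are assumptions, and the actual study used a live speaker rather than a recorded one.

    import numpy as np

    def scale_noise_to_snr(speech, noise, target_snr_db):
        """Scale `noise` so that 10*log10(P_speech / P_noise) equals target_snr_db.

        `speech` and `noise` are 1-D arrays of equal length (trim or loop the
        noise beforehand). Returns the rescaled noise, ready to be added to speech.
        """
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2)
        gain = np.sqrt(p_speech / (p_noise * 10.0 ** (target_snr_db / 10.0)))
        return noise * gain

    # e.g. mixing four-talker babble with a sentence at -16 dB SNR:
    # mixed = sentence + scale_noise_to_snr(sentence, babble, -16.0)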
Abstract:
In recent years, a 'cultural turn' in the study of class has resulted in a rich body of work detailing the ways in which class advantage and disadvantage are emotionally inscribed and embodied in educational settings. To date, however, much of this literature has focused on the urban sphere. To address this gap in the literature, this paper focuses on the affective evaluations that teachers employed in rural and remote Australian schools make of students' families, bodies, expectations and practices. The central argument is that the teachers' moral ascriptions of class are powerfully shaped by dominant socio-cultural constructions of rurality that equate 'the rural' with agriculture.
Abstract:
The purpose of this chapter is to describe the use of caricatured contrasting scenarios (Bødker, 2000) and how they can be used to consider potential designs for disruptive technologies. The disruptive technology in this case is Automatic Speech Recognition (ASR) software in workplace settings. The particular workplace is the Magistrates Court of the Australian Capital Territory.

Caricatured contrasting scenarios are ideally suited to exploring how ASR might be implemented in a particular setting because they allow potential implementations to be “sketched” quickly and with little effort. This sketching of potential interactions and the emphasis of both positive and negative outcomes allows the benefits and pitfalls of design decisions to become apparent.

A brief description of the Court is given, describing the reasons for choosing the Court for this case study. The work of the Court is framed as taking place in two modes: front of house, where the courtroom itself is, and backstage, where documents are processed and the business of the court is recorded and encoded into various systems.

Caricatured contrasting scenarios describing the introduction of ASR to the front of house are presented and then analysed. These scenarios show that the introduction of ASR to the court would be highly problematic.

The final section describes how ASR could be re-imagined in order to make it useful for the court. A final scenario is presented that describes how this re-imagined ASR could be integrated into both the front of house and backstage of the court in a way that could strengthen both processes.
Abstract:
The experience of emotional expression in the context of social relations is not well understood for people diagnosed with schizophrenia. Early phenomenological research on the experience of people diagnosed with schizophrenia traditionally focussed on self experience in isolation from others, with later research explicating isolated aspects of self experience in relation to others. The current research aimed to follow the progressive experience of emotional expression of people diagnosed with schizophrenia in relation to others over 12 months, in order to capture a broad spectrum of experience. This study involved unstructured interviews with 7 participants, an average of 4 times each, over a period of 12 months. Due to the unstructured nature of the interviews, a great breadth of experience was explicated. From the interviews there emerged 6 themes grouped together as a transition into, and 5 themes grouped together as a recovery from, symptoms associated with a diagnosis of schizophrenia. Special significance was given to the theme of relational confusion, an experience that illuminates how social stressors and personal characteristics relate to the responses that are associated with a diagnosis of schizophrenia. It was suggested that participants experienced themselves, including their distancing and isolating responses, as part of a social system. The breadth of experiences that emerged afforded a framework within which prior phenomenological research findings on static moments of experience have been located. A more meaningful understanding of the transition into, and recovery from, the experiences associated with a diagnosis of schizophrenia will afford advances in mental health practice.
Abstract:
Automatic Speech Recognition (ASR) has matured into a technology which is becoming more common in our everyday lives, and is emerging as a necessity to minimise driver distraction when operating in-car systems such as navigation and infotainment. In “noise-free” environments, word recognition performance of these systems has been shown to approach 100%; however, this performance degrades rapidly as the level of background noise is increased. Speech enhancement is a popular method for making ASR systems more robust. Single-channel spectral subtraction was originally designed to improve human speech intelligibility, and many attempts have been made to optimise this algorithm in terms of signal-based metrics such as maximised Signal-to-Noise Ratio (SNR) or minimised speech distortion. Such metrics are used to assess enhancement performance for intelligibility rather than speech recognition, therefore making them sub-optimal for ASR applications. This research investigates two methods for closely coupling subtractive-type enhancement algorithms with ASR: (a) a computationally-efficient Mel-filterbank noise subtraction technique based on likelihood-maximisation (LIMA), and (b) introducing phase spectrum information to enable spectral subtraction in the complex frequency domain. Likelihood-maximisation uses gradient-descent to optimise parameters of the enhancement algorithm to best fit the acoustic speech model given a word sequence known a priori. Whilst this technique is shown to improve the ASR word accuracy performance, it is also identified to be particularly sensitive to non-noise mismatches between the training and testing data. Phase information has long been ignored in spectral subtraction as it is deemed to have little effect on human intelligibility. In this work it is shown that phase information is important in obtaining highly accurate estimates of clean speech magnitudes which are typically used in ASR feature extraction. Phase Estimation via Delay Projection is proposed based on the stationarity of sinusoidal signals, and demonstrates the potential to produce improvements in ASR word accuracy over a wide range of SNRs. Throughout the dissertation, consideration is given to practical implementation in vehicular environments, which resulted in two novel contributions: a LIMA framework which takes advantage of the grounding procedure common to speech dialogue systems, and a resource-saving formulation of frequency-domain spectral subtraction for realisation in field-programmable gate array hardware. The techniques proposed in this dissertation were evaluated using the Australian English In-Car Speech Corpus, which was collected as part of this work. This database is the first of its kind within Australia and captures real in-car speech of 50 native Australian speakers in seven driving conditions common to Australian environments.
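For orientation, the baseline that both contributions build on is conventional magnitude-domain spectral subtraction. The sketch below is that textbook baseline only, not the LIMA framework or Phase Estimation via Delay Projection described above; the frame length, over-subtraction factor and leading-silence noise estimate are illustrative assumptions.

    import numpy as np

    def spectral_subtract(noisy, frame_len=512, hop=256, noise_frames=10,
                          alpha=2.0, floor=0.02):
        """Magnitude spectral subtraction: estimate the noise magnitude spectrum
        from the first `noise_frames` frames (assumed speech-free), subtract it
        with over-subtraction factor `alpha`, floor the result, and resynthesise
        with the unmodified noisy phase via overlap-add."""
        win = np.hanning(frame_len)
        n_frames = 1 + (len(noisy) - frame_len) // hop
        frames = np.stack([noisy[i*hop:i*hop+frame_len] * win for i in range(n_frames)])
        spec = np.fft.rfft(frames, axis=1)
        mag, phase = np.abs(spec), np.angle(spec)

        noise_mag = mag[:noise_frames].mean(axis=0)
        clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)

        out = np.zeros(len(noisy))
        for i, frame in enumerate(np.fft.irfft(clean_mag * np.exp(1j*phase), axis=1)):
            out[i*hop:i*hop+frame_len] += frame   # Hann at 50% overlap sums to ~1
        return out

Note that the final step reuses the noisy phase, which is exactly the simplification that the phase-estimation work in this dissertation revisits.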
Abstract:
A self-report measure of the emotional and behavioural reactions to intrusive thoughts was developed. The paper presents data that confirm the stability, reliability and validity of the new 7-item measure. Emotional and behavioural reactions to intrusions emerged as separate factors on the Emotional and Behavioural Reactions to Intrusions Questionnaire (EBRIQ), a finding confirmed by an independent stress study. Test-retest reliability over 30-70 days was good. Expected relationships with other constructs were significant. Stronger negative responses to intrusions were associated with lower mindfulness scores and higher ratings of experiential avoidance, thought suppression, and intensity and frequency of craving. The EBRIQ will help explore differences in reactions to intrusive thoughts in clinical and non-clinical populations, and across different emotional and behavioural states. It will also be useful in assessing the effects of therapeutic approaches such as mindfulness.
Abstract:
Acoustically, car cabins are extremely noisy and, as a consequence, audio-only in-car voice recognition systems perform poorly. As the visual modality is immune to acoustic noise, using the visual lip information from the driver is seen as a viable strategy for circumventing this problem through audio-visual automatic speech recognition (AVASR). However, implementing AVASR requires a system that is able to accurately locate and track the driver’s face and lip area in real-time. In this paper we present such an approach using the Viola-Jones algorithm. Using the AVICAR [1] in-car database, we show that the Viola-Jones approach is a suitable method of locating and tracking the driver’s lips, despite the visual variability of illumination and head pose, for an audio-visual speech recognition system.
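OpenCV’s Haar cascade classifier is a widely available implementation of the Viola-Jones detector, so a per-frame face and lip-region locator along these lines can be sketched as follows. This is not the system evaluated on AVICAR in the paper: the cascade file, the lower-third mouth heuristic and the detector parameters are assumptions chosen for illustration.

    import cv2

    # Haar cascade files ship with the opencv-python package.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_driver_face(frame_bgr):
        """Return (x, y, w, h) of the largest detected face, or None."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)   # crude compensation for in-car illumination
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5, minSize=(60, 60))
        if len(faces) == 0:
            return None
        return max(faces, key=lambda r: r[2] * r[3])

    def crop_lip_region(frame_bgr, face_box):
        """Heuristic lip ROI: the lower third of the face box."""
        x, y, w, h = face_box
        return frame_bgr[y + 2 * h // 3 : y + h, x : x + w]

In a tracking setting the detector would normally be run on every frame (or interleaved with a cheaper tracker), with the lip region passed on to the visual front-end of the recogniser.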
Abstract:
Acoustically, car cabins are extremely noisy and, as a consequence, existing audio-only speech recognition systems for voice-based control of vehicle functions, such as the GPS-based navigator, perform poorly. Audio-only speech recognition systems fail to make use of the visual modality of speech (e.g., lip movements). As the visual modality is immune to acoustic noise, utilising this visual information in conjunction with an audio-only speech recognition system has the potential to improve the accuracy of the system. The field of recognising speech using both auditory and visual inputs is known as Audio-Visual Automatic Speech Recognition (AVASR). Continuous research in the AVASR field has been ongoing for the past twenty-five years, with notable progress being made. However, the practical deployment of AVASR systems for use in a variety of real-world applications has not yet emerged. The main reason is that most research to date has neglected to address variabilities in the visual domain, such as illumination and viewpoint, in the design of the visual front-end of the AVASR system. In this paper we present an AVASR system in a real-world car environment using the AVICAR database [1], which is a publicly available in-car database, and we show that using visual speech in conjunction with the audio modality is a better approach for improving the robustness and effectiveness of voice-only recognition systems in car-cabin environments.
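As a concrete illustration of using visual information in conjunction with an audio-only recogniser, the sketch below shows the simplest form of late (decision) fusion: a weighted combination of per-word scores from independent audio and visual classifiers. This is a generic example, not the fusion strategy of the paper; the dictionary interface and the fixed audio weight are assumptions, and in practice the weight would be tied to the estimated acoustic reliability (e.g. SNR) so that the visual stream dominates in a noisy cabin.

    import numpy as np

    def fuse_word_scores(audio_scores, visual_scores, audio_weight=0.7):
        """Weighted late fusion of per-word log-likelihoods.

        `audio_scores` and `visual_scores` map each candidate word to the
        log-likelihood assigned by the audio-only and visual-only recognisers.
        Returns the best word and the full fused score dictionary.
        """
        lam = float(np.clip(audio_weight, 0.0, 1.0))
        fused = {w: lam * audio_scores[w] + (1.0 - lam) * visual_scores[w]
                 for w in audio_scores}
        return max(fused, key=fused.get), fused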