862 results for voice analysis
at Queensland University of Technology - ePrints Archive
Abstract:
Details of a project which fictionalises the oral history of the life of the author's polio-afflicted grandmother Beth Bevan and her experiences at a home for children with disabilities are presented. The speech and language patterns recognised in the first-person narration are described, as is the sense of voice and identity communicated through the oral history.
Abstract:
This paper presents a method of voice activity detection (VAD) suitable for high-noise scenarios, based on the fusion of two complementary systems. The first system uses a proposed non-Gaussianity score (NGS) feature based on normal probability testing. The second system employs a histogram distance score (HDS) feature that detects changes in the signal by conducting a template-based similarity measure between adjacent frames. The decision outputs of the two systems are then merged using an open-by-reconstruction fusion stage. The accuracy of the proposed method was compared to that of several baseline VAD methods on a database created using real recordings of a variety of high-noise environments.
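The two scores described in this abstract can be illustrated with a minimal sketch. Here `ngs_score` approximates the non-Gaussianity idea with a Shapiro-Wilk normality test, and `hds_score` uses an L1 distance between adjacent-frame amplitude histograms; the function names, the test choice and the distance measure are illustrative assumptions, not the authors' implementation, and the fusion stage is omitted.

```python
import numpy as np
from scipy import stats

def ngs_score(frame):
    # Non-Gaussianity sketch: background noise is assumed near-Gaussian,
    # so a low normality-test p-value suggests speech is present.
    _, p = stats.shapiro(frame)
    return 1.0 - p  # higher score -> less Gaussian -> more likely speech

def hds_score(prev_frame, frame, bins=32):
    # Histogram distance sketch: L1 distance between the amplitude
    # histograms of adjacent frames flags changes in the signal.
    lo = min(prev_frame.min(), frame.min())
    hi = max(prev_frame.max(), frame.max())
    h1, _ = np.histogram(prev_frame, bins=bins, range=(lo, hi), density=True)
    h2, _ = np.histogram(frame, bins=bins, range=(lo, hi), density=True)
    return np.abs(h1 - h2).sum()
```

Under this sketch, a frame containing a noisy tone scores higher on `ngs_score` than a frame of pure Gaussian noise, while identical adjacent frames give an `hds_score` of zero.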
Abstract:
In this paper we discuss the failure of the employee voice system at the Bundaberg Base Hospital (BBH) in Australia. Surgeon Jayant Patel was arrested over the deaths of patients on whom he operated when he was the director of surgery at the hospital. Our interest is in the reasons the established employee voice mechanisms failed when employees attempted to bring serious issues to the attention of managers. Our data is based on an analysis of the sworn testimonies of those who participated in two inquiries concerning these events. An analysis of the events with a particular focus on the failings of the voice system is presented. We ask the following: how and why did the voice systems in the case of the BBH fail?
Abstract:
While my PhD is practice-led research, it is my contention that such an inquiry cannot develop as long as it tries to emulate other models of research. I assert that practice-led research needs to account for an epistemological unknown or uncertainty central to the practice of art. By focusing on what I call the artist's 'voice,' I will show how this 'voice' comprises a dual motivation—'articulate' representation and 'inarticulate' affect—which do not necessarily derive from the artist. Through an analysis of art-historical precedents, critical literature (the work of Jean-François Lyotard and Andrew Benjamin, the critical methods of philosophy, phenomenology and psychoanalysis) as well as of my own painting and digital arts practice, I aim to demonstrate how this unknown or uncertain aspect of artistic inquiry can be mapped. It is my contention that practice-led research needs to address and account for this dualistic 'voice' in order to more comprehensively articulate its unique contribution to research culture.
Abstract:
For several reasons, the Fourier phase domain is less favored than the magnitude domain in signal processing and modeling of speech. To correctly analyze the phase, several factors must be considered and compensated for, including the effects of the step size, windowing function and other processing parameters. Building on a review of these factors, this paper investigates a spectral representation based on the Instantaneous Frequency Deviation, but in which the step size between processing frames is used in calculating phase changes, rather than the traditional single-sample interval. Reflecting these longer intervals, the term delta-phase spectrum is used to distinguish this from instantaneous derivatives. Experiments show that mel-frequency cepstral coefficient (MFCC) features derived from the delta-phase spectrum (termed Mel-Frequency delta-phase features) can produce broadly similar performance to equivalent magnitude-domain features for both voice activity detection and speaker recognition tasks. Further, it is shown that the fusion of the magnitude and phase representations yields performance benefits over either in isolation.
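The step-size-based phase change described above can be sketched as follows. This is an illustrative reading under stated assumptions (Hann window, real FFT, removal of the expected per-bin linear phase advance), not the paper's feature pipeline; the mel filterbank and cepstral steps are omitted.

```python
import numpy as np

def delta_phase_spectrum(x, frame_len=512, hop=128):
    # Delta-phase sketch: per-bin phase change between frames separated
    # by the hop size, rather than a single-sample interval.
    win = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * win
                       for i in range(n_frames)])
    phase = np.angle(np.fft.rfft(frames, axis=1))
    # Remove the phase advance expected from the frame shift alone
    # (2*pi*k*hop/N for bin k), then wrap the deviation to (-pi, pi].
    k = np.arange(phase.shape[1])
    expected = 2 * np.pi * k * hop / frame_len
    dphi = np.diff(phase, axis=0) - expected
    return np.angle(np.exp(1j * dphi))
```

For a stationary tone centred on a bin, the deviation at that bin is near zero, so the representation responds to departures from stationarity rather than to the frame shift itself.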
Abstract:
This paper presents a method of voice activity detection (VAD) for high-noise scenarios, using a noise-robust voiced speech detection feature. The developed method is based on the fusion of two systems. The first system utilises the maximum peak of the normalised time-domain autocorrelation function (MaxPeak). The second system uses a novel combination of cross-correlation and the zero-crossing rate of the normalised autocorrelation to approximate a measure of signal pitch and periodicity (CrossCorr) that is hypothesised to be noise robust. The score outputs of the two systems are then merged using weighted sum fusion to create the proposed autocorrelation zero-crossing rate (AZR) VAD. The accuracy of AZR was compared to state-of-the-art and standardised VAD methods, and AZR was shown to outperform the best-performing baseline with an average relative improvement of 24.8% in half-total error rate (HTER) on the QUT-NOISE-TIMIT database, created using real recordings from high-noise environments.
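A minimal sketch of the two cues and their weighted-sum fusion follows. The function names, the lag range and the equal fusion weights are assumptions; in particular, the paper's CrossCorr feature combines cross-correlation with the zero-crossing rate of the autocorrelation, which is simplified here to the autocorrelation's zero-crossing rate alone.

```python
import numpy as np

def norm_autocorr(frame):
    # Normalised time-domain autocorrelation for non-negative lags.
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    return ac / (ac[0] + 1e-12)

def max_peak(frame, min_lag=16):
    # MaxPeak cue: largest autocorrelation peak beyond a minimum lag
    # (a real system would restrict lags to a plausible pitch range).
    return norm_autocorr(frame)[min_lag:].max()

def zcr(x):
    # Zero-crossing rate of a sequence.
    return np.mean(np.abs(np.diff(np.signbit(x).astype(int))))

def azr_score(frame, w=0.5):
    # Weighted-sum fusion: a strong autocorrelation peak and a slowly
    # oscillating autocorrelation (low ZCR) both point to voiced speech.
    ac = norm_autocorr(frame)
    return w * max_peak(frame) + (1 - w) * (1.0 - zcr(ac))
```

Under this sketch, a periodic (voiced-like) frame scores well above a white-noise frame.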
Abstract:
Research into complaints handling in the health care system has predominantly focused on examining the processes that underpin the organisational systems. An understanding of the cognitive decisions made by patients that influence whether they are satisfied or dissatisfied with the care they are receiving has had limited attention thus far. This study explored the lived experiences of Queensland acute care patients who complained about some aspect of their inpatient stay. A purposive sample of sixteen participants was recruited and interviewed about their experience of making a complaint. The qualitative data gathered through the interview process was subjected to an Interpretative Phenomenological Analysis (IPA) approach, guided by the philosophical influences of Heidegger (1889-1976). As part of the interpretive endeavour of this study, Lazarus' cognitive emotive model with situational challenge was drawn on to provide a contextual understanding of the emotions experienced by the study participants. Analysis of the research data, aided by Leximancer™ software, revealed a series of relational themes that supported the interpretative data analysis process undertaken. The superordinate thematic statements that emerged from the narratives via the hermeneutic process were ineffective communication, standards of care were not consistent, being treated with disrespect, information on how to complain was not clear, and perceptions of negligence. This study's goal was to provide health services with information about complaints handling that can help them develop service improvements. The study patients articulated the need for health care system reform; they want to be listened to, to be acknowledged, to be believed, for people to take ownership if they had made a mistake, for mistakes not to occur again, and to receive an apology.
For these initiatives to be fully realised, the paradigm shift must go beyond regurgitating complaints data metrics as percentages per patient contact, towards a concerted effort to evaluate what the qualitative complaints data is really saying. Identifying a more positive and proactive approach to encouraging patients to complain when they are dissatisfied has the potential to drive improvements.
Abstract:
Nutrition interventions in the form of both self-management education and individualised diet therapy are considered essential for the long-term management of type 2 diabetes mellitus (T2DM). The measurement of diet is essential to inform, support and evaluate nutrition interventions in the management of T2DM. Barriers inherent within health care settings and systems limit ongoing access to personnel and resources, while traditional prospective methods of assessing diet are burdensome for the individual and often result in changes in typical intake to facilitate recording. This thesis investigated the inclusion of information and communication technologies (ICT) to overcome limitations to current approaches in the nutritional management of T2DM, in particular the development, trial and evaluation of the Nutricam dietary assessment method (NuDAM), consisting of a mobile phone photo/voice application to assess nutrient intake in a free-living environment with older adults with T2DM. Study 1: Effectiveness of an automated telephone system in promoting change in dietary intake among adults with T2DM. The effectiveness of an automated telephone system, Telephone-Linked Care (TLC) Diabetes, designed to deliver self-management education was evaluated in terms of promoting dietary change in adults with T2DM and sub-optimal glycaemic control. In this secondary data analysis, independent of the larger randomised controlled trial, complete data were available for 95 adults (59 male; mean age (±SD) = 56.8±8.1 years; mean (±SD) BMI = 34.2±7.0 kg/m2). The treatment effect showed a reduction in total fat of 1.4% and saturated fat of 0.9% of energy intake, body weight of 0.7 kg and waist circumference of 2.0 cm. In addition, a significant increase in the nutrition self-efficacy score of 1.3 (p<0.05) was observed in the TLC group compared to the control group.
The modest trends observed in this study indicate that the TLC Diabetes system does support the adoption of positive nutrition behaviours as a result of diabetes self-management education; however, caution must be applied in the interpretation of results due to the inherent limitations of the dietary assessment method used. The decision to use a closed-list FFQ with known bias may have influenced the accuracy of reporting dietary intake in this instance. This study provided an example of the methodological challenges experienced with measuring changes in absolute diet using a FFQ, and reaffirmed the need for novel prospective assessment methods capable of capturing natural variance in usual intakes. Study 2: The development and trial of the NuDAM recording protocol. The feasibility of the Nutricam mobile phone photo/voice dietary record was evaluated in 10 adults with T2DM (6 male; age=64.7±3.8 years; BMI=33.9±7.0 kg/m2). Intake was recorded over a 3-day period using both Nutricam and a written estimated food record (EFR). Compared to the EFR, the Nutricam device was found to be acceptable among subjects; however, energy intake was under-recorded using Nutricam (-0.6±0.8 MJ/day; p<0.05). Beverages and snacks were the items most frequently not recorded using Nutricam; however, forgotten meals contributed the greatest difference in energy intake between records. In addition, the quality of dietary data recorded using Nutricam was unacceptable for just under one-third of entries. It was concluded that an additional mechanism was necessary to complement dietary information collected via Nutricam. Modifications to the method were made to allow for clarification of Nutricam entries and probing for forgotten foods during a brief phone call to the subject the following morning. The revised recording protocol was evaluated in Study 4.
Study 3: The development and trial of the NuDAM analysis protocol. Part A explored the effect of the type of portion size estimation aid (PSEA) on the error associated with quantifying four portions of 15 single food items contained in photographs. Seventeen dietetic students (1 male; age=24.7±9.1 years; BMI=21.1±1.9 kg/m2) estimated all food portions on two occasions: without aids and with aids (food models or reference food photographs). Overall, the use of a PSEA significantly reduced mean (±SD) group error between estimates compared to no aid (-2.5±11.5% vs. 19.0±28.8%; p<0.05). The type of PSEA (i.e. food models vs. reference food photographs) did not have a notable effect on the group estimation error (-6.7±14.9% vs. 1.4±5.9%, respectively; p=0.321). This exploratory study provided evidence that the use of aids in general, rather than the type of aid, was more effective in reducing estimation error. Findings guided the development of the Dietary Estimation and Assessment Tool (DEAT) for use in the analysis of the Nutricam dietary record. Part B evaluated the effect of the DEAT on the error associated with the quantification of two 3-day Nutricam dietary records in a sample of 29 dietetic students (2 males; age=23.3±5.1 years; BMI=20.6±1.9 kg/m2). Subjects were randomised into two groups: Group A and Group B. For Record 1, the use of the DEAT (Group A) resulted in a smaller error compared to estimations made without the tool (Group B) (17.7±15.8%/day vs. 34.0±22.6%/day, respectively; p=0.331). In comparison, all subjects used the DEAT to estimate Record 2, with resultant error similar between Groups A and B (21.2±19.2%/day vs. 25.8±13.6%/day, respectively; p=0.377). In general, the moderate estimation error associated with quantifying food items did not translate into clinically significant differences in the nutrient profile of the Nutricam dietary records; only amorphous foods were notably over-estimated in energy content without the use of the DEAT (57kJ/day vs.
274kJ/day; p<0.001). A large proportion (89.6%) of the group found the DEAT helpful when quantifying food items contained in the Nutricam dietary records. The use of the DEAT reduced quantification error, minimising any potential effect on the estimation of energy and macronutrient intake. Study 4: Evaluation of the NuDAM. The accuracy and inter-rater reliability of the NuDAM in assessing energy and macronutrient intake was evaluated in a sample of 10 adults (6 males; age=61.2±6.9 years; BMI=31.0±4.5 kg/m2). Intake recorded using both the NuDAM and a weighed food record (WFR) was coded by three dietitians and compared with an objective measure of total energy expenditure (TEE) obtained using the doubly labelled water technique. At the group level, energy intake (EI) was under-reported to a similar extent using both methods, with an EI:TEE ratio of 0.76±0.20 for the NuDAM and 0.76±0.17 for the WFR. At the individual level, four subjects reported implausible levels of energy intake using the WFR method, compared to three using the NuDAM. Overall, moderate to high correlation coefficients (r=0.57-0.85) were found across energy and macronutrients, except fat (r=0.24), between the two dietary measures. High agreement was observed between dietitians for estimates of energy and macronutrients derived from both the NuDAM (ICC=0.77-0.99; p<0.001) and the WFR (ICC=0.82-0.99; p<0.001). All subjects preferred using the NuDAM over the WFR to record intake and were willing to use the novel method again over longer recording periods. This research program explored two novel approaches which utilised distinct technologies to aid in the nutritional management of adults with T2DM. In particular, this thesis makes a significant contribution to the evidence base surrounding the use of PhRs through the development, trial and evaluation of a novel mobile phone photo/voice dietary record.
The NuDAM is an extremely promising advancement in the nutritional management of individuals with diabetes and other chronic conditions. Future applications lie in integrating the NuDAM with other technologies to facilitate practice across the remaining stages of the nutrition care process.
Abstract:
This paper documents the use of bibliometrics as a methodology to bring forth a structured, systematic and rigorous way to analyse and evaluate a range of literature. When starting out and reading broadly for my doctoral studies, one article by Trigwell and Prosser (1996b) led me to reflect about my level of comprehension as the content, concepts and methodology did not resonate with my epistemology. A disconnection between our paradigms emerged. Further reading unveiled the work by Doyle (1987) who categorised research in teaching and teacher education by three main areas: teacher characteristics, methods research and teacher behaviour. My growing concerns that there were gaps in the knowledge also exposed the difficulties in documenting said gaps. As an early researcher who required support to locate myself in the field and to find my research voice, I identified bibliometrics (Budd, 1988; Yeoh & Kaur, 2007) as an appropriate methodology to add value and rigour in three ways. Firstly, the application of bibliometrics to analyse articles is systematic, builds a picture from the characteristics of the literature, and offers a way to elicit themes within the categories. Secondly, by systematic analysis there is occasion to identify gaps within the body of work, limitations in methodology or areas in need of further research. Finally, extension and adaptation of the bibliometrics methodology, beyond citation or content analysis, to investigate the merit of methodology, participants and instruments as a determinant for research worth allowed the researcher to build confidence and contribute new knowledge to the field. Therefore, this paper frames research in the pedagogic field of Higher Education through teacher characteristics, methods research and teacher behaviour, visually represents the literature analysis and locates my research self within methods research. 
Through my research voice I will present the bibliometrics methodology, the outcomes and document the landscape of pedagogy in the field of Higher Education.
Abstract:
Content analysis of text offers a method for exploring experiences which usually remain unquestioned and unexamined. In this paper the authors analyse a set of patient progress notes by re-framing them as a narrative account of a significant event in the experience of a patient, her family and attending health care workers. Examination of these notes provides insights into aspects of clinical practice which are usually dealt with at a taken-for-granted level. An interpretation of previously unexamined therapeutic practices within the social and political context of institutional health care is offered.
Abstract:
Smartphones have become a critical part of our lives, as they offer advanced capabilities with PC-like functionalities. They are being widely deployed and are no longer used only for classical voice-centric communication. New smartphone malware keeps emerging, and most of it still targets Symbian OS. In the case of Symbian OS, application signing seemed to be an appropriate measure for slowing down malware appearance. Unfortunately, recent examples showed that signing can be bypassed, resulting in new malware outbreaks. In this paper, we present a novel approach to static malware detection in resource-limited mobile environments. This approach can be used to extend currently used third-party application signing mechanisms to increase malware detection capabilities. In our work, we extract function calls from binaries in order to apply our clustering mechanism, called centroid. This method is capable of detecting unknown malware. Our results are promising: the employed mechanism might find application at distribution channels, such as online application stores. Additionally, it seems suitable for use directly on smartphones for (pre-)checking installed applications.
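The centroid idea outlined in this abstract (profile binaries by extracted function calls, cluster known malware, then flag binaries close to a cluster centre) might look roughly like the sketch below. The call names, the cosine similarity measure and the 0.8 threshold are illustrative assumptions, not details from the paper.

```python
from collections import Counter
import math

def call_vector(calls):
    # Frequency vector of function calls extracted from a binary.
    return Counter(calls)

def cosine(a, b):
    # Cosine similarity between two sparse call-frequency vectors.
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb + 1e-12)

def centroid(vectors):
    # Centroid of a malware family: mean call frequency over its samples.
    total = Counter()
    for v in vectors:
        total.update(v)
    return Counter({k: c / len(vectors) for k, c in total.items()})

def flag_as_malware(calls, family_centroids, threshold=0.8):
    # Flag a binary whose call profile is close to any known centroid.
    v = call_vector(calls)
    return any(cosine(v, c) >= threshold for c in family_centroids)
```

A binary sharing the call profile of a known family is flagged, while one with a disjoint profile is not; unknown variants of a family can still land near its centroid, which is the sense in which such a scheme can detect previously unseen samples.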
Abstract:
The nature and value of ‘professionalism’ has long been contested by both producers and consumers of policy. Most recently, governments have rewritten and redefined professionalism as compliance with externally imposed ‘standards’. This has been achieved by silencing the voices of those who inhabit the professional field of education. This paper uses Foucauldian archaeology to excavate the enunciative field of professionalism by digging through the academic and institutional (political) archive, and in doing so identifies two key policy documents for further analysis. The excavation shows that while the voices of (academic) authority speak of competing discourses emerging, with professional standards promulgated as the mechanism to enhance professionalism, an alternative regime of truth identifies the privileged use of (managerial) voices from outside the field of education to create a discourse of compliance. There has long been a mismatch between the voices of authority on discourses around professionalism from the academic archive and those that count in contemporary and emerging Australian educational policy. In this paper, we counter this mismatch and argue that reflexive educators’ regimes of truth are worthy of attention and should be heard and amplified.
Abstract:
Police in-vehicle systems include a visual output mobile data terminal (MDT) with manual input via touch screen and keyboard. This study investigated the potential for voice-based input and output modalities to reduce the subjective workload of police officers while driving. Nineteen experienced drivers of police vehicles (one female) from New South Wales (NSW) Police completed four simulated urban drives. Three drives included a concurrent secondary task: an imitation licence number search using an emulated MDT. Three different interface output-input modalities were examined: Visual-Manual, Visual-Voice, and Audio-Voice. Following each drive, participants rated their subjective workload using the NASA Raw Task Load Index and completed questions on acceptability. A questionnaire on interface preferences was completed by participants at the end of their session. Engaging in secondary tasks while driving significantly increased subjective workload. The Visual-Manual interface resulted in higher time demand than either of the voice-based interfaces and greater physical demand than the Audio-Voice interface. The Visual-Voice and Audio-Voice interfaces were rated easier to use and more useful than the Visual-Manual interface, although they were not significantly different from each other. Findings largely echoed those derived from the analysis of the objective driving performance data. It is acknowledged that under standard procedures, officers should not drive while performing tasks concurrently with certain in-vehicle policing systems; however, in practice this sometimes occurs. Taking action now to develop voice-based technology for police in-vehicle systems has the potential to realise visions for safer and more efficient vehicle-based police work.