854 results for Detached Utterances
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted to find a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them using all test utterances encountered during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further performance benefits over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcoming of the traditional heuristic-based approach to dataset selection.
Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of the impostor cohorts required in alternative techniques for speaker verification.
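For readers unfamiliar with the hybrid classifier named in this abstract, the following is a minimal sketch of the GMM mean supervector SVM idea, assuming scikit-learn and NumPy; the component count, relevance factor, feature dimensions and synthetic data are illustrative placeholders rather than the configuration used in the thesis.

```python
# Minimal sketch of a GMM mean-supervector SVM speaker classifier.
# Assumes scikit-learn and NumPy; utterances are stand-in feature arrays
# of shape (n_frames, n_dims) in place of real cepstral features.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def train_ubm(background_features, n_components=8, seed=0):
    """Train a universal background model (UBM) on pooled impostor features."""
    return GaussianMixture(n_components=n_components, covariance_type='diag',
                           random_state=seed).fit(background_features)

def mean_supervector(ubm, utterance, relevance=16.0):
    """MAP-adapt the UBM means towards one utterance and stack them."""
    resp = ubm.predict_proba(utterance)          # (n_frames, n_components)
    n_k = resp.sum(axis=0)                       # soft occupation counts
    f_k = resp.T @ utterance                     # first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]   # per-component adaptation weight
    means = alpha * (f_k / np.maximum(n_k, 1e-8)[:, None]) + (1 - alpha) * ubm.means_
    return means.ravel()

# One SVM per target speaker: target supervectors (positive class) against
# a background of impostor supervectors (negative class).
rng = np.random.default_rng(0)
ubm = train_ubm(rng.normal(size=(2000, 12)))
target = [mean_supervector(ubm, rng.normal(0.5, 1.0, (300, 12))) for _ in range(5)]
impostor = [mean_supervector(ubm, rng.normal(size=(300, 12))) for _ in range(20)]
svm = SVC(kernel='linear').fit(np.vstack(target + impostor), [1] * 5 + [0] * 20)
test = mean_supervector(ubm, rng.normal(0.5, 1.0, (300, 12)))
print('accept' if svm.predict([test])[0] == 1 else 'reject')
```

The key step is the MAP adaptation of the universal background model's means towards each utterance: stacking the adapted means yields a fixed-length supervector, giving the SVM a common input space regardless of utterance length.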
Abstract:
Traditional speech enhancement methods optimise signal-level criteria such as signal-to-noise ratio, but these approaches are sub-optimal for noise-robust speech recognition. Likelihood-maximising (LIMA) frameworks are an alternative that optimise the parameters of enhancement algorithms based on state sequences generated for utterances with known transcriptions. Previous reports on LIMA frameworks have shown significant promise for improving speech recognition accuracy under additive background noise for a range of speech enhancement techniques. In this paper, we discuss the drawbacks of the LIMA approach when multiple layers of acoustic mismatch are present – namely background noise and speaker accent. Experimentation using LIMA-based Mel-filterbank noise subtraction on American and Australian English in-car speech databases supports this discussion, demonstrating that inferior speech recognition performance occurs when a second layer of mismatch is seen during evaluation.
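To make the optimisation target concrete, here is a hedged sketch of Mel-filterbank noise subtraction with per-filter over-subtraction factors as the tunable parameters; the function names and the single-Gaussian stand-in for the recogniser's acoustic model are our assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation) of Mel-filterbank
# noise subtraction, whose per-filter over-subtraction factors are the kind
# of enhancement parameters a LIMA framework tunes to maximise the
# likelihood of the state sequence for a known transcription.
import numpy as np

def mel_noise_subtract(mel_spectra, noise_estimate, alpha, floor=0.01):
    """Subtract a scaled noise estimate from Mel-filterbank energies.

    mel_spectra:    (n_frames, n_filters) noisy Mel energies
    noise_estimate: (n_filters,) average noise Mel energies
    alpha:          (n_filters,) over-subtraction factors (the free parameters)
    """
    cleaned = mel_spectra - alpha * noise_estimate
    return np.maximum(cleaned, floor * mel_spectra)  # spectral floor

def lima_objective(alpha, mel_spectra, noise_estimate, state_loglik):
    """LIMA-style objective: likelihood of the aligned state sequence given
    features enhanced with alpha; state_loglik stands in for the recogniser."""
    return state_loglik(mel_noise_subtract(mel_spectra, noise_estimate, alpha))

# Toy check with a single-Gaussian "acoustic model": subtracting the true
# noise level scores higher than leaving the features untouched.
rng = np.random.default_rng(1)
clean = rng.gamma(2.0, 1.0, size=(100, 24))
noise = np.full(24, 0.5)
noisy = clean + noise
loglik = lambda feats: -np.sum((feats - clean.mean(axis=0)) ** 2)
print(lima_objective(np.ones(24), noisy, noise, loglik) >
      lima_objective(np.zeros(24), noisy, noise, loglik))  # True
```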
Abstract:
Traditional speech enhancement methods optimise signal-level criteria such as signal-to-noise ratio, but such approaches are sub-optimal for noise-robust speech recognition. Likelihood-maximising (LIMA) frameworks, on the other hand, optimise the parameters of speech enhancement algorithms based on state sequences generated by a speech recogniser for utterances of known transcriptions. Previous applications of LIMA frameworks have generated a set of global enhancement parameters for all model states without taking into account the distribution of model occurrence, making optimisation susceptible to favouring frequently occurring models, in particular silence. In this paper, we demonstrate the existence of highly disproportionate phonetic distributions on two corpora with distinct speech tasks, and propose to normalise the influence of each phone based on a priori occurrence probabilities. Likelihood analysis and speech recognition experiments verify this approach for improving ASR performance in noisy environments.
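A minimal sketch of the proposed normalisation, assuming forced alignments are available; all names and figures below are illustrative rather than taken from the paper.

```python
# Illustrative sketch of prior-normalised optimisation: each phone's
# contribution to the global objective is divided by its a priori
# occurrence probability, so frequently occurring models (notably
# silence) no longer dominate. Names are ours, not the paper's.
from collections import Counter

def phone_priors(aligned_phone_labels):
    """Estimate a priori phone probabilities from forced alignments."""
    counts = Counter(aligned_phone_labels)
    total = sum(counts.values())
    return {phone: n / total for phone, n in counts.items()}

def normalised_objective(per_phone_loglik, priors):
    """Sum per-phone log-likelihoods, each weighted by 1/prior so every
    phone carries equal influence regardless of frequency."""
    return sum(ll / priors[phone] for phone, ll in per_phone_loglik.items())

# Toy example: 'sil' dominates the frame counts, yet after normalisation
# each phone contributes equally (about -100 each, -300 in total).
labels = ['sil'] * 70 + ['ah'] * 20 + ['k'] * 10
priors = phone_priors(labels)                    # {'sil': 0.7, 'ah': 0.2, 'k': 0.1}
per_phone = {'sil': -70.0, 'ah': -20.0, 'k': -10.0}
print(normalised_objective(per_phone, priors))   # ~ -300.0
```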
Abstract:
The aim of this project was to investigate the in vitro osteogenic potential of human mesenchymal progenitor cells in novel matrix architectures built by means of a three-dimensional bioresorbable synthetic framework in combination with a hydrogel. Human mesenchymal progenitor cells (hMPCs) were isolated from a human bone marrow aspirate by gradient centrifugation. Before in vitro engineering of scaffold-hMPC constructs, the adipogenic and osteogenic differentiation potential was demonstrated by staining of neutral lipids and induction of bone-specific proteins, respectively. After expansion in monolayer cultures, the cells were enzymatically detached and then seeded in combination with a hydrogel into polycaprolactone (PCL) and polycaprolactone-hydroxyapatite (PCL-HA) frameworks. This scaffold design concept is characterized by a novel matrix architecture, good mechanical properties and slow degradation kinetics of the framework, and a biomimetic milieu for cell delivery and proliferation. To induce osteogenic differentiation, the specimens were cultured in an osteogenic cell culture medium and maintained in vitro for 6 weeks. Cellular distribution and viability within the three-dimensional hMPC bone grafts were documented by scanning electron microscopy, cell metabolism assays, and confocal laser microscopy. Secretion of the osteogenic marker molecules type I procollagen and osteocalcin was analyzed by semiquantitative immunocytochemistry assays. Alkaline phosphatase activity was visualized by p-nitrophenyl phosphate substrate reaction. During osteogenic stimulation, hMPCs proliferated toward and onto the PCL and PCL-HA scaffold surfaces, and metabolic activity increased, reaching a plateau by day 15. The temporal pattern of bone-related marker molecules produced by the in vitro tissue-engineered scaffold-cell constructs revealed that hMPCs differentiated better along the osteogenic lineage within this biomimetic matrix architecture.
Abstract:
Zero energy buildings (ZEB) and zero energy homes (ZEH) are currently a hot topic globally for policy makers (what are the benefits and costs), designers (how do we design them), the construction industry (can we build them), marketers (will consumers buy them) and researchers (do they work and what are the implications). This paper presents initial findings from actual measured data for a 9-star (as built), off-ground detached family home constructed in south-east Queensland in 2008. The integrated systems approach to the design of the house is analysed in terms of each of its three main goals: maximising the thermal performance of the building envelope, minimising energy demand whilst maintaining energy service levels, and implementing a multi-pronged low carbon approach to energy supply. The performance outcomes of each of these stages are evaluated against definitions of Net Zero Carbon / Net Zero Emissions (Site and Source) and Net Zero Energy (onsite generation vs primary energy imports). The paper concludes with a summary of the multiple benefits of combining very high efficiency building envelopes with diverse energy management strategies: a robustness, resilience, affordability and autonomy not generally seen in housing.
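As a toy illustration of the site versus source accounting mentioned above (not the paper's measured data), the sketch below contrasts the two balances using assumed primary-energy factors.

```python
# Toy illustration (not the paper's measured data) of site- versus
# source-energy net zero accounting. The primary-energy factors below
# are assumed values for the sake of the example.
PEF_ELEC = 3.0   # assumed kWh primary per kWh delivered grid electricity
PEF_GAS = 1.1    # assumed factor for reticulated gas

def net_zero_site(elec_import, gas_import, elec_export):
    """Site definition: balance of delivered energy at the meters (kWh)."""
    return elec_export - (elec_import + gas_import) >= 0

def net_zero_source(elec_import, gas_import, elec_export):
    """Source definition: balance of the primary energy behind each flow."""
    return elec_export * PEF_ELEC - (elec_import * PEF_ELEC +
                                     gas_import * PEF_GAS) >= 0

# A home can miss net zero on a site basis yet achieve it on a source
# basis, because exported electricity displaces high-primary-energy grid
# generation while gas imports carry a lower factor.
print(net_zero_site(elec_import=2000, gas_import=1500, elec_export=3000))    # False
print(net_zero_source(elec_import=2000, gas_import=1500, elec_export=3000))  # True
```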
Abstract:
Emotions play a central role in mediation as they help to define the scope and direction of a conflict. When a party to mediation expresses (and hence entrusts) their emotions to those present, a mediator must do more than simply listen: they must attend to these emotions. Mediator empathy is an essential skill for communicating to a party that their feelings have been heard and understood, but it can lead mediators into trouble. Whilst there might exist a theoretical divide between the notions of empathy and sympathy, the very best characteristics of mediators (a caring and compassionate nature) may see empathy and sympathy merge, resulting in challenges to mediator neutrality. This article first outlines the semantic difference between empathy and sympathy and the role that intrapsychic conflict can play in the convergence of these behavioural phenomena. It then defines emotional intelligence in the context of a mediation, suggesting that only the most emotionally intelligent mediators are able to connect emotionally with the parties while maintaining an impression of impartiality – the quality of remaining ‘attached yet detached’ in relation to the process. It is argued that these emotionally intelligent mediators share the common qualities of strong self-awareness and emotional self-regulation.
Abstract:
Designed for independent living, retirement villages provide either detached or semi-detached residential dwellings with car parking and small private yards. Retirement village developments usually include a mix of independent living units (ILUs) and serviced apartments (SAs), with community facilities providing a shared congregational area for village activities and socialising. Retirement village assets differ from traditional residential assets due to their operation in accordance with statutory legislation. In Australia, each State and Territory has its own Retirement Village Act and Regulations. In essence, the village operator provides the land and buildings to the residents, who pay an amount on entry for the right of occupation. On departure from the units, an agreed proportion of either the original purchase price or the sale price is paid to the outgoing resident. The market value of the operator’s interest in the retirement village is therefore based upon the estimated future income from Deferred Management Fees (DMF) and capital gain upon roll-over receivable by the operator in accordance with the respective residency agreements. Given the lumpiness of these payments, there is general acceptance that the most appropriate approach to valuation is through Discounted Cash Flow (DCF) analysis. There is, however, inconsistency among valuers across Australia in how they undertake their DCF analysis, leading to differences in reported values and subsequent confusion among users of valuation services. To give guidance to valuers and enhance confidence among users of valuation services, this paper investigates the five major elements of discounted cash flow methodology, namely cash flows, escalation factors, holding period, terminal value and discount rate. Whilst there is dissatisfaction with the financial structuring of the DMF in residency agreements, as long as there are future financial returns receivable by the village owner/operator, DCF will continue to be the most appropriate valuation methodology for resident funded retirement villages.
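A minimal sketch of how the five elements combine in such a valuation, under our own hypothetical figures and simplifications (annual receipts, constant escalation and discount rates); it is not the paper's model.

```python
# Minimal sketch (hypothetical figures, not the paper's model) of a
# resident-funded retirement village DCF built from the five elements:
# cash flows, escalation factors, holding period, terminal value and
# discount rate.
def village_dcf(annual_dmf, annual_rollover_gain, escalation,
                holding_years, terminal_value, discount_rate):
    """Present value of escalating DMF and roll-over receipts plus a
    discounted terminal value at the end of the holding period."""
    pv = 0.0
    for year in range(1, holding_years + 1):
        cash_flow = (annual_dmf + annual_rollover_gain) * (1 + escalation) ** year
        pv += cash_flow / (1 + discount_rate) ** year
    return pv + terminal_value / (1 + discount_rate) ** holding_years

value = village_dcf(annual_dmf=400_000, annual_rollover_gain=150_000,
                    escalation=0.03, holding_years=10,
                    terminal_value=6_000_000, discount_rate=0.12)
print(f"Operator's interest: ${value:,.0f}")
```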
Abstract:
Retirement village assets differ from traditional residential assets due to their operation in accordance with statutory legislation. Designed for independent living, retirement villages provide either detached or semi-detached residential dwellings with car parking and small private yards, with community facilities providing a shared congregational area for village activities and socialising. In essence, the village operator provides the land and buildings to the residents, who pay an amount on entry for the right of occupation. On departure from the units, an agreed proportion of either the original purchase price or the sale price is paid to the outgoing resident. As ongoing levies are typically offset by ongoing operational expenses, the market value of the operator's interest in the retirement village is predominantly based upon the estimated future income from deferred management fees and capital gain upon roll-over receivable by the operator in accordance with the respective residency agreements. Given the lumpiness of these payments, there is general acceptance that the most appropriate approach to valuation is through discounted cash flow (DCF) analysis. There is, however, inconsistency among valuers across Australia in how they undertake their DCF analysis, leading to differences in reported values and subsequent confusion among users of valuation services. To give guidance to valuers and enhance confidence among users of valuation services, this paper investigates the five major elements of DCF methodology, namely cash flows, escalation factors, holding period, terminal value and discount rate.
Abstract:
Explanations of the role of analogies in learning science at a cognitive level are made in terms of creating bridges between new information and students’ prior knowledge. In this empirical study of learning with analogies in an 11th grade chemistry class, we explore an alternative explanation at the "social" level, where analogy shapes classroom discourse. Students in the study developed analogies within small groups and with their teacher. These classroom interactions were monitored to identify changes in discourse that took place through these activities. Beginning from socio-cultural perspectives and hybridity, we investigated classroom discourse during analogical activities. From our analyses, we theorized a merged discourse that explains how the analog discourse becomes intertwined with the target discourse, generating a transitional state where meanings, signs, symbols, and practices are in flux. Three categories were developed that capture how students intertwined the analog and target discourses: merged words, merged utterances/sentences, and merged practices.
Abstract:
We report on an analysis of discussions in an online community of people with chronic illness using socio-cognitively motivated, automatically produced semantic spaces. The analysis aims to further the emerging theory of "transition" (how people can learn to incorporate the consequences of illness into their lives). An automatically derived representation of each individual's sense of self is created in the semantic space through analysis of the email utterances of the community members. The movement over time of this sense of self is visualised, via projection, with respect to axes of "ordinariness" and "extra-ordinariness". Qualitative evaluation shows that the visualisation parallels the transitions people undergo during the course of their illness. The research aims to advance tools for the analysis of textual data in order to promote greater use of the tacit knowledge found in online virtual communities. We hope it also encourages further interest in the representation of the sense of self.
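The following is a small sketch of one way such a projection can be constructed, using latent semantic analysis in scikit-learn; the corpus, anchor vocabularies and axis construction are our assumptions, not the authors' semantic-space method.

```python
# Illustrative sketch (our construction, not the authors' system) of
# projecting a member's sense of self onto an "ordinariness" axis in an
# automatically produced semantic space, here built with latent semantic
# analysis over a tiny stand-in corpus.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

emails = [
    "back at work today, normal routine, school run and coffee",
    "hospital again, scans and treatment all week",
    "quiet weekend gardening, felt like my old ordinary self",
    "chemo side effects, everything revolves around the illness",
]
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(emails)
svd = TruncatedSVD(n_components=2, random_state=0).fit(X)  # the semantic space

def embed(texts):
    """Centroid of the texts' positions in the semantic space."""
    return svd.transform(vectorizer.transform(texts)).mean(axis=0)

# Axis running from "extra-ordinary" (illness-dominated) towards
# "ordinary" anchor vocabulary.
axis = embed(["normal routine ordinary work"]) - embed(["hospital treatment illness"])
axis /= np.linalg.norm(axis)

# A member's movement over time: centroid of each period's utterances,
# projected onto the ordinariness axis.
early, late = embed([emails[1], emails[3]]), embed([emails[0], emails[2]])
print("early:", float(early @ axis), "late:", float(late @ axis))
```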
Abstract:
A century ago, as the Western world embarked on a period of traumatic change, the visual realism of photography and documentary film brought print and radio news to life. The vision that these new media threw into stark relief was one of intense social and political upheaval: the birth of modernity, fired and tempered in the crucible of the Great War. As millions died in this fiery chamber and the influenza pandemic that followed, lines of empires staggered to their fall, and new geo-political boundaries were scored in the raw, red flesh of Europe. The decade of 1910 to 1919 also heralded a prolific period of artistic experimentation. It marked the beginning of the social and artistic age of modernity and, with it, the nascent beginnings of a new art form: film. We still live in the shadow of this violent, traumatic and fertile age, haunted by the ghosts of Flanders and Gallipoli and its ripples of innovation and creativity. Something happened here, but to understand how and why is not easy, for the documentary images we carry with us in our collective cultural memory have become what Baudrillard refers to as simulacra. Detached from their referents, they have become referents themselves, used to underscore other, grand narratives in television and Hollywood films. The personal histories of the individuals they represent so graphically – and their hope, love and loss – are folded into a national story that serves, like war memorials and national holidays, to buttress social myths and values. And, as filmic images cross-pollinate, with each iteration offering a new catharsis, events that must have been terrifying or wondrous are abstracted. In this paper we first discuss this transformation through reference to theories of documentary and memory; this will form a conceptual framework for a subsequent discussion of the short film Anmer. Produced by the first author in 2010, Anmer is a visual essay on documentary, simulacra and the symbolic narratives of history. Its form, structure and aesthetic speak of the confluence of documentary, history, memory and dream. Located in the first decade of the twentieth century, its non-linear narratives of personal tragedy and poetic dreamscapes are an evocative reminder of the distance between intimate experience, grand narratives, and the mythologies of popular films. This transformation of documentary sources not only played out in the processes of the film’s production, but also came to form its theme.
Abstract:
Fusion techniques have received considerable attention for achieving lower error rates in biometrics. A fused classifier architecture based on the sequential integration of multi-instance and multi-sample fusion schemes allows a controlled trade-off between false alarms and false rejects. Expressions for each type of error for the fused system have previously been derived for the case of statistically independent classifier decisions. It is shown in this paper that the performance of this architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of the fusion model parameters, ‘N’, the number of classifiers, and ‘M’, the number of attempts/samples, and facilitates the determination of error bounds for false rejects and false accepts for each specific user. The error trade-off performance of the architecture is evaluated using HMM-based speaker verification on utterances of individual digits. Results show that performance is improved for the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings, such as credit card numbers, in telephone or voice over internet protocol based applications. It is also applicable to other biometric modalities such as fingerprints and handwriting samples.
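For orientation, here is a toy sketch of the error expressions for the independence case that the paper improves upon; the assignment of an AND rule to multi-instance fusion and an OR rule to multi-sample fusion is our illustrative assumption.

```python
# Toy sketch of fused error rates under the statistical-independence
# assumption. We assume multi-instance fusion (N classifiers) uses an
# AND rule within each attempt and multi-sample fusion (M attempts)
# uses an OR rule across attempts.
def fused_error_rates(far, frr, n_classifiers, m_attempts):
    """Per-transaction false-accept and false-reject rates."""
    far_attempt = far ** n_classifiers               # impostor passes all N
    frr_attempt = 1 - (1 - frr) ** n_classifiers     # genuine fails if any rejects
    fused_far = 1 - (1 - far_attempt) ** m_attempts  # accepted on any attempt
    fused_frr = frr_attempt ** m_attempts            # rejected on every attempt
    return fused_far, fused_frr

# Raising N suppresses false accepts while raising M suppresses false
# rejects: the controlled trade-off described above.
print(fused_error_rates(far=0.05, frr=0.05, n_classifiers=2, m_attempts=3))
```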
Abstract:
This study investigated Chinese College English students' perceptions of pragmatics, their pragmatic competence in selected speech acts, the strategies they employed in acquiring pragmatic knowledge, as well as their general approach to learning English as a foreign language. The research was triggered by a national curriculum initiative that prioritizes the need for College English students to enhance their ability to use English effectively in different social interactions (Chinese College English Education and Supervisory Committee, 2007). The traditional "grammar-translation" and "examination-oriented" method is believed to have reduced Chinese College English students to what are dubbed "mute" and "deaf" language learners (Zhang, 2008; Zhao, 2009). Many students lack pragmatic knowledge of how to interpret discourse by relating utterances to their meanings, understanding the intentions of language users, and recognising how language is used in specific settings (Bachman & Palmer, 1996, 2010). There is an increasing body of literature on raising awareness of the importance of pragmatic knowledge and strategies for classroom instruction. However, to date, researchers have tended to focus largely on the teaching of pragmatics, rather than on how students acquire pragmatic competence (Bardovi-Harlig & Dörnyei, 1998; Du, 2004; Hou, 2007; Ruan, 2007; Schauer, 2009). It is this gap in the research that this study fills, with a focus on different types of pragmatic knowledge, learner perceptions of such knowledge, and the learning strategies that College English students employ in the process of learning English in general, and pragmatics in particular. Three strands of theory related to second language acquisition (Ellis, 1985, 1994) – pragmatics (Levinson, 1983; Mey, 2001; Yule, 1996), intercultural communication (Kramsch, 1998; Samovar & Porter, 1997; Samovar, Porter & McDaniel, 2009) and English as a lingua franca (ELF) (Canagarajah, 2006; Firth, 1996; Pennycook, 2010) – were employed to establish a conceptual framework for data collection and analyses. Key constructs derived from the three related theories helped to form a typology for a detailed examination and theorization of the empirical evidence gathered from different sources. Four research instruments – a questionnaire (N=237), Discourse Completion Tasks (DCTs) (N=55), focus group interviews (N=18), and a textbook task analysis – were employed to collect data for this systematic inquiry. Data collected by the different instruments were analyzed and compared by way of triangulation to enhance their validity and reliability. Major findings derived from the different sources highlighted that, although College English students were grammatically advanced language learners, they displayed limited pragmatic knowledge and a highly restricted repertoire of language learning strategies. The majority of the respondents, however, believed that pragmatic knowledge was as important as linguistic knowledge in the process of developing communicative competence for interaction in different contexts. It was argued that the combination of a less than sufficient English proficiency, limited knowledge of pragmatics, inadequate language materials and tasks, and a small stock of language learning strategies was a major hindrance to effective learning and communication, resulting in pragmatic failures in many intercultural communication situations.
As the first systematic study of how Chinese College English students learned pragmatics, the research provided a solid empirical base for developing a tentative model for the learning of pragmatics in the College English classroom in China and similar educational contexts. The model was strengthened by a unique combination of theories of pragmatics, intercultural communication and ELF. Findings from this research provided insights into how Chinese College English students perceived pragmatics in the English as a foreign language (EFL) curriculum, their processes of learning, as well as the strategies they utilized in developing linguistic and pragmatic knowledge and competence.
Abstract:
The current rapid urban growth throughout the world manifests in various ways; historically, cities have grown similarly, alternately or simultaneously through planned extensions and organic informal settlements (Mumford, 1989). Within cities, different urban morphological regions can reveal different contexts of economic growth and/or periods of dramatic social/technological change (Whitehand, 2001, 105). Morpho-typological study of alternate contexts can present alternative models and contribute to the present discourse questioning traditional paradigms of urban planning and design (Todes et al., 2010). In this study a series of cities is examined as a preliminary exploration into the urban morphology of cities in ‘humid subtropical’ climates. From an initial set of twenty, six cities were selected: Sao Paulo, Brazil; Jacksonville, USA; Maputo, Mozambique; Kanpur, India; Hong Kong, China; and Brisbane, Australia. The urban form was analysed from satellite imagery at a constant scale. Urban morphological regions (types) were identified as those demonstrating particular consistent characteristics of form (density, typology and pattern) different from their surroundings when examined at a constant scale. This analysis was correlated with existing data and literature discussing the proliferation of two types of urban development, ‘informal settlement’ (defined here as self-organised communities, identifiable with but not always synonymous with ‘slums’) and ‘suburbia’ (defined here as master planned communities of generally detached houses prevalent in Western society) – the extreme ends of a hypothetical spectrum from ‘planned’ to ‘spontaneous’ urban development. Preliminary results show that some cities contain a wide variety of urban form, ranging from the highly organic ‘self-organised’ type to the highly planned ‘master planned community’ (in the case of Sao Paulo), while others tend to fall at one end of the planning spectrum or the other (more planned in the cases of Brisbane and Jacksonville; both highly planned and highly organic in the case of Maputo). Further research will examine the social, economic and political drivers and controls that lead to this diversity or homogeneity of urban form, and will speculate on the role of self-organisation as a process for the adaptation of urban form.