262 results for litter mixture
Abstract:
Gaussian mixture models (GMMs) have become an established means of modeling feature distributions in speaker recognition systems. It is useful for experimentation and practical implementation purposes to develop and test these models in an efficient manner, particularly when computational resources are limited. A method of combining vector quantization (VQ) with single multi-dimensional Gaussians is proposed to rapidly generate a robust model approximation to the Gaussian mixture model. A fast method of testing these systems is also proposed and implemented. Results on the NIST 1996 Speaker Recognition Database suggest comparable, and in some cases improved, verification performance relative to the traditional GMM-based analysis scheme. In addition, previous research on the task of speaker identification indicated similar system performance between the VQ-Gaussian-based technique and GMMs.
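The core idea of the abstract above can be sketched in a few lines: replace EM training with a k-means (VQ) partition of the feature frames and fit one diagonal Gaussian per cell. This is a minimal illustration assuming generic feature vectors, not the authors' implementation or parameter choices.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal
from sklearn.cluster import KMeans

def vq_gaussian_model(frames, n_components=8, seed=0):
    """Partition feature frames with k-means (the VQ step), then fit
    one diagonal-covariance Gaussian per cell -- no EM iterations."""
    km = KMeans(n_clusters=n_components, n_init=10, random_state=seed).fit(frames)
    weights, means, variances = [], [], []
    for k in range(n_components):
        cell = frames[km.labels_ == k]
        weights.append(len(cell) / len(frames))
        means.append(cell.mean(axis=0))
        variances.append(cell.var(axis=0) + 1e-6)  # variance floor
    return np.array(weights), np.array(means), np.array(variances)

def avg_log_likelihood(frames, weights, means, variances):
    """Average per-frame log-likelihood, the usual verification score."""
    comp = np.stack([multivariate_normal.logpdf(frames, mean=m, cov=np.diag(v))
                     for m, v in zip(means, variances)], axis=1)  # (N, K)
    return logsumexp(comp + np.log(weights), axis=1).mean()
```

The resulting model scores test utterances exactly like a conventional GMM; only the training step differs.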
An approach to statistical lip modelling for speaker identification via chromatic feature extraction
Abstract:
This paper presents a novel technique for the tracking of moving lips for the purpose of speaker identification. In our system, a model of the lip contour is formed directly from chromatic information in the lip region. Iterative refinement of contour point estimates is not required. Colour features are extracted from the lips via concatenated profiles taken around the lip contour. Reduction of order in lip features is obtained via principal component analysis (PCA) followed by linear discriminant analysis (LDA). Statistical speaker models are built from the lip features based on the Gaussian mixture model (GMM). Identification experiments performed on the M2VTS database show encouraging results.
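The PCA → LDA → per-speaker GMM pipeline described above can be sketched with scikit-learn. The feature extraction itself (chromatic lip profiles) is upstream and not shown, and all dimensions and mixture sizes here are placeholders, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.mixture import GaussianMixture

def train_speaker_models(features, labels, n_pca=20, n_mix=4, seed=0):
    """PCA for compression, then LDA for class separability, then
    one GMM per speaker fitted on the projected features."""
    pca = PCA(n_components=n_pca).fit(features)
    z = pca.transform(features)
    lda = LinearDiscriminantAnalysis().fit(z, labels)
    z = lda.transform(z)
    models = {
        s: GaussianMixture(n_components=n_mix, covariance_type='diag',
                           random_state=seed).fit(z[labels == s])
        for s in np.unique(labels)
    }
    return pca, lda, models

def identify(frames, pca, lda, models):
    """Return the speaker whose GMM best explains the projected frames."""
    z = lda.transform(pca.transform(frames))
    return max(models, key=lambda s: models[s].score(z))
```

Identification picks the speaker model with the highest average log-likelihood over the test frames.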
Abstract:
This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals’ seldom studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it becomes habitual and involves minimal intention or decision making. Key variables investigated are the activity initiation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., voice call with duration in seconds); the research focuses on customers’ spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted using the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, such that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals’ highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this essentially reflects their likely lifestyle and occupational traits.
Other significant research contributions include fitting GMMs using VB to circular data, i.e., the temporal usage behaviour, and developing clustering algorithms suitable for high-dimensional data based on the use of VB-GMM.
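For the standard (non-splitting) VB-GMM that the thesis extends, scikit-learn's `BayesianGaussianMixture` is a convenient reference implementation: over-specify the number of components and the variational objective drives the weights of surplus components toward zero. The synthetic "spiky" data below stands in for spatial usage records and is purely illustrative.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic "spiky" spatial data: two tight clusters (frequently
# visited cell towers) over a diffuse background of other activity.
data = np.vstack([
    rng.normal([0.0, 0.0], 0.1, (300, 2)),
    rng.normal([5.0, 5.0], 0.1, (300, 2)),
    rng.normal([2.5, 2.5], 3.0, (100, 2)),
])

# Deliberately over-specify n_components; the variational objective
# drives the weights of surplus components toward zero.
vb = BayesianGaussianMixture(
    n_components=15,
    weight_concentration_prior=0.01,  # sparse Dirichlet-process prior
    max_iter=500,
    random_state=0,
).fit(data)

# Count components that retain non-negligible weight.
active = int((vb.weights_ > 0.02).sum())
```

Unlike the thesis's split-based algorithm, this variant prunes components downward from an over-complete start rather than growing them by splitting.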
Abstract:
This paper proposes the use of eigenvoice modeling techniques with the Cross Likelihood Ratio (CLR) as a criterion for speaker clustering within a speaker diarization system. The CLR has previously been shown to be a robust decision criterion for speaker clustering using Gaussian Mixture Models. Recently, eigenvoice modeling techniques have become increasingly popular, due to their ability to adequately represent a speaker from sparse training data and to better capture differences in speaker characteristics. This paper hence proposes that it would be beneficial to capitalize on the advantages of eigenvoice modeling in a CLR framework. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, with a 35.1% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.
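CLR formulations vary across the literature; one common UBM-free, symmetric variant can be sketched as below. This is an illustration of the GMM-based criterion the paper builds on, not its eigenvoice-based scoring.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cross_likelihood_ratio(seg_a, seg_b, n_mix=4, seed=0):
    """Symmetric CLR between two speech segments: higher when each
    segment is well explained by the model trained on the other,
    which suggests both segments come from the same speaker."""
    m_a = GaussianMixture(n_components=n_mix, covariance_type='diag',
                          random_state=seed).fit(seg_a)
    m_b = GaussianMixture(n_components=n_mix, covariance_type='diag',
                          random_state=seed).fit(seg_b)
    # score() returns the average per-frame log-likelihood
    return (m_b.score(seg_a) - m_a.score(seg_a)) + \
           (m_a.score(seg_b) - m_b.score(seg_b))
```

In a diarization system, the cluster pair with the highest CLR is merged iteratively until no pair exceeds a stopping threshold.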
Abstract:
Visual activity detection of lip movements can be used to overcome the poor performance of voice activity detection based solely on the audio domain, particularly in noisy acoustic conditions. However, most of the research conducted in visual voice activity detection (VVAD) has neglected addressing variabilities in the visual domain, such as viewpoint variation. In this paper we investigate the effectiveness of the visual information from the speaker’s frontal and profile views (i.e., left and right side views) for the task of VVAD. As far as we are aware, our work constitutes the first real attempt to study this problem. We describe our visual front end approach and the Gaussian mixture model (GMM) based VVAD framework, and report experimental results using the freely available CUAVE database. The experimental results show that VVAD is indeed possible from profile views, and we give a quantitative comparison of VVAD based on frontal and profile views. The results presented are useful in the development of multi-modal Human Machine Interaction (HMI) using a single camera, where the speaker’s face may not always be frontal.
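A GMM-based VVAD of the kind described reduces to a two-class likelihood-ratio test on per-frame visual features. The sketch below assumes the visual front end (frontal or profile) has already produced feature vectors; mixture sizes and the threshold are illustrative, not the paper's values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_vvad(speech_feats, nonspeech_feats, n_mix=4, seed=0):
    """Fit one GMM to visual features from speech frames and one to
    non-speech frames; the same scheme applies to frontal or profile
    views, given a view-specific visual front end upstream."""
    g_speech = GaussianMixture(n_components=n_mix, covariance_type='diag',
                               random_state=seed).fit(speech_feats)
    g_silence = GaussianMixture(n_components=n_mix, covariance_type='diag',
                                random_state=seed).fit(nonspeech_feats)
    return g_speech, g_silence

def detect(frames, g_speech, g_silence, threshold=0.0):
    """Per-frame log-likelihood ratio test: True means visual speech."""
    llr = g_speech.score_samples(frames) - g_silence.score_samples(frames)
    return llr > threshold
```

In practice the per-frame decisions are usually smoothed over time (e.g., with a short median filter) before being fused with the audio VAD.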
Abstract:
Notions of capture and manipulation imply the existence of an interface that combines performer with system, an interface that separates (or intervenes in) the space between performance, intention and history. It is precisely the effect of the intermediary device, and the opportunities it provides, on the practice and craft of the actor that is examined in this work. Defining the scope of current practice for the contemporary actor is a key construct of this challenge, with the most appropriate definition revolving around the pursuit of providing a required mixture of performance and content for the live, mediated, framed and variously captured formats that exist in the present day. One of these formats is Performance Capture, which this paper interrogates in more detail.
Abstract:
Sustainable transport has become a necessity rather than an option, to address the problems of congestion and urban sprawl, whose effects include increased trip lengths and travel times. A more sustainable form of development, known as Transit Oriented Development (TOD), is presumed to offer sustainable travel choices with a reduced need to travel to access daily destinations, by providing a mixture of land uses together with good-quality public transport service and infrastructure for walking and cycling. However, performance assessment of these developments with respect to the travel characteristics of their inhabitants is required. This research proposes a five-step methodology for evaluating the transport impacts of TODs. The steps for TOD evaluation include pre-TOD assessment, traffic and travel data collection, determination of traffic impacts, determination of travel impacts, and drawing outcomes. Typically, TODs comprise various land uses and hence have various types of users. Assessment of the characteristics of all user groups is essential for obtaining an accurate picture of transport impacts. A case study TOD, Kelvin Grove Urban Village (KGUV), located 2 km north-west of the Brisbane central business district in Australia, was selected for implementing the proposed methodology and evaluating the transport impacts of a TOD from an Australian perspective. The outcomes of this analysis indicated that KGUV generated 27 to 48 percent less traffic compared to standard published rates specified for homogeneous uses. Further, all user groups of KGUV used more sustainable modes of transport compared to regional and similarly located suburban users, with higher trip lengths for shopping and education trips. Although the results from this case study development support the transport claims of reduced traffic generation and sustainable travel choices by way of TODs, further investigation is required, considering different styles, scales and locations of TODs.
The proposed methodology may be further refined by using results from new TODs and a framework for TOD evaluation may be developed.
Abstract:
Disposal of mud and ash, particularly in wet weather conditions, is a significant expense for mills. This paper reports on one part of a process to pelletise mud and ash, aimed at making mud and ash more attractive to growers across entire mill districts. The full process is described in a separate paper. The part described in this paper involves re-constituting mud cake from the filter station at Tully Mill and processing it in a decanter centrifuge. The material produced by re-constituting and centrifuging is drier and made up of separate particles. The material needs to mix easily with boiler ash, and the mixture needs to be fed easily into a flue gas drier to be dried to low moisture. The results achieved with the particular characteristics of Tully Mill rotary vacuum filter cake are presented. It was found that an internal rotor with a 20° beach was not adequate to process re-constituted rotary vacuum filter mud. A rotor with a 10° beach worked much more successfully. A total of four tonnes of centrifuged mud with a moisture content ranging from 60% to 65% was produced. It was found that the torque, flocculant rate and dose rate had a statistically significant effect on the moisture content. Feed rate did not have a noticeable impact on the moisture content by itself, but torque had a much larger impact on the moisture content at the low feed rate than at the high feed rate. These results indicated that the moisture content of the mud can most likely be reduced with a low feed rate, low flocculant rate, high dose rate and high torque. One issue that is believed to affect the operation of a decanter centrifuge was the large quantity of long bagasse fibres in the rotary vacuum filter mud. It is likely that the long fibres limited the throughput of the centrifuge and the moisture content achieved.
Abstract:
Flow regime transition criteria are of practical importance for two-phase flow analyses at reduced gravity conditions. Here, flow regime transition criteria that take the frictional pressure loss effect into account were studied in detail. Criteria for reduced gravity conditions were developed by extending an existing model, and comparisons with various experimental datasets taken at microgravity conditions showed satisfactory agreement. Sample computations of the model were performed at various gravity conditions, namely 0.196, 1.62, 3.71, and 9.81 m/s², corresponding to micro-gravity and lunar, Martian and Earth surface gravity, respectively. It was found that the effect of gravity on bubbly-slug and slug-annular (churn) transitions in a two-phase flow system was more pronounced at low liquid flow conditions, whereas the gravity effect could be ignored at high mixture volumetric flux conditions. For the annular flow transitions due to flow reversal and the onset of droplet entrainment, a higher superficial gas velocity was obtained at higher gravity levels.
Abstract:
The formation of hypertrophic scars is a frequent outcome of wound repair and often requires further therapy with treatments such as silicone gel sheets (SGS; Perkins et al., 1983). Although widely used, knowledge regarding SGS and their mechanism of action on hypertrophic scars is limited. Furthermore, SGS require consistent application for at least twelve hours a day for up to twelve consecutive months, beginning as soon as wound re-epithelialisation has occurred. Preliminary research at QUT has shown that some species of silicone present in SGS have the ability to permeate into collagen gel skin mimetics upon exposure. An analogue of these species, GP226, was found to decrease both collagen synthesis and the total amount of collagen present following exposure to cultures of cells derived from hypertrophic scars. This silicone of interest was a crude mixture of silicone species, which resolved into five fractions of different molecular weight. These five fractions were found to have differing effects on collagen synthesis and cell viability following exposure to fibroblasts derived from hypertrophic scars (HSF), keloid scars (KF) and normal skin (nHSF and nKF). The research performed herein continues to further assess the potential of GP226 and its fractions for scar remediation by determining in more detail their effects on HSF, KF, nHSF, nKF and human keratinocytes (HK) in terms of cell viability and proliferation at various time points. Through these studies it was revealed that Fraction IV was the most active fraction, as it induced a reduction in cell viability and proliferation most similar to that observed with GP226. Cells undergoing apoptosis were also detected in HSF cultures exposed to GP226 and Fraction IV using the TUNEL assay (Roche). These investigations were difficult to pursue further as the fractionation process used for GP226 was labour-intensive and time-inefficient.
Therefore a number of silicones with similar structure to Fraction IV were synthesised and screened for their effect following application to HSF and nHSF. PDMS7-g-PEG7, a silicone-PEG copolymer of low molecular weight and low hydrophilic-lipophilic balance factor, was found to be the most effective at reducing cell proliferation and inducing apoptosis in cultures of HSF, nHSF and HK. Further studies investigated gene expression through microarray and superarray techniques and demonstrated that many genes are differentially expressed in HSF following treatment with GP226, Fraction IV and PDMS7-g-PEG7. In brief, it was demonstrated that genes for TGFβ1 and TNF are not differentially regulated while genes for AIFM2, IL8, NSMAF, SMAD7, TRAF3 and IGF2R show increased expression (>1.8 fold change) following treatment with PDMS7-g-PEG7. In addition, genes for αSMA, TRAF2, COL1A1 and COL3A1 have decreased expression (>-1.8 fold change) following treatment with GP226, Fraction IV and PDMS7-g-PEG7. The data obtained suggest that many different pathways related to apoptosis and collagen synthesis are affected in HSF following exposure to PDMS7-g-PEG7. The significance is that silicone-PEG copolymers, such as GP226, Fraction IV and PDMS7-g-PEG7, could potentially be a non-invasive substitute for apoptosis-inducing chemical agents that are currently used as scar treatments. It is anticipated that these findings will ultimately contribute to the development of a novel scar therapy with faster action and improved outcomes for patients suffering from hypertrophic scars.
Abstract:
The growth of technologies and tools branded as ‘new media’ or ‘Web 2.0’ has sparked much discussion about the internet and its place in all facets of social life. Such debate includes the potential for blogs and citizen journalism projects to replace or alter journalism and mainstream media practices. However, while the journalism-blog dynamic has attracted the most attention, the actual work of political bloggers, the roles they play in the mediasphere and the resources they use have been comparatively ignored. This project will look at political blogging in Australia and France - sites commenting on or promoting political events and ideas, and run by citizens, politicians, and journalists alike. In doing so, the structure of networks formed by bloggers and the nature of communication within political blogospheres will be examined. Previous studies of political blogging around the world have focussed on individual nations, finding that in some cases the networks are divided between different political ideologies. By comparing two countries with different political representation (a two-party dominated system vs. a wider political spectrum), this study will determine the structure of these political blogospheres, and correlate these structures with the political environment in which they are situated. The thesis adapts concepts from communication and media theories, including framing, agenda setting, and opinion leaders, to examine the work of political bloggers and their place within the mediasphere. As well as developing a hybrid theoretical base for research into blogs and other online communication, the project outlines new methodologies for carrying out studies of online activity through the analysis of several topical networks within the wider activity collected for this project. The project draws on hyperlink and textual data collected from a sample of Australian and French blogs between January and August 2009.
From this data, the thesis provides an overview of ‘everyday’ political blogging, showing posting patterns over several months of activity, away from national elections and their associated campaigns. However, while other work in this field has looked solely at cumulative networks, treating collected data as a static network, this project will also look at specific cases to see how the blogospheres change with time and topics of discussion. Three case studies are used within the thesis to examine how blogs cover politics, featuring an international political event (the Obama inauguration) and local political topics (the opposition to the ‘Création et Internet’, or HADOPI, law in France, and the ‘Utegate’ scandal in Australia). By using a mixture of qualitative and quantitative methods, the study analyses data collected from a population of sites from both countries, looking at their linking patterns, relationship with mainstream media, and topics of interest. This project will subsequently help to further develop methodologies in this field and provide new and detailed information on both online networks and internet-based political communication in Australia and France.
Abstract:
Mixture models are a flexible tool for unsupervised clustering that have found popularity in a vast array of research areas. In studies of medicine, the use of mixtures holds the potential to greatly enhance our understanding of patient responses through the identification of clinically meaningful clusters that, given the complexity of many data sources, may otherwise be intangible. Furthermore, when developed in the Bayesian framework, mixture models provide a natural means for capturing and propagating uncertainty in different aspects of a clustering solution, arguably resulting in richer analyses of the population under study. This thesis aims to investigate the use of Bayesian mixture models in analysing varied and detailed sources of patient information collected in the study of complex disease. The first aim of this thesis is to showcase the flexibility of mixture models in modelling markedly different types of data. In particular, we examine three common variants of the mixture model, namely finite mixtures, Dirichlet process mixtures and hidden Markov models. Beyond the development and application of these models to different sources of data, this thesis also focuses on modelling different aspects relating to uncertainty in clustering. Examples of clustering uncertainty considered are uncertainty in a patient’s true cluster membership and accounting for uncertainty in the true number of clusters present. Finally, this thesis aims to address and propose solutions to the task of comparing clustering solutions, whether this be comparing patients or observations assigned to different subgroups or comparing clustering solutions over multiple datasets. To address these aims, we consider a case study in Parkinson’s disease (PD), a complex and commonly diagnosed neurodegenerative disorder. In particular, two commonly collected sources of patient information are considered.
The first source of data concerns symptoms associated with PD, recorded using the Unified Parkinson’s Disease Rating Scale (UPDRS), and constitutes the first half of this thesis. The second half of this thesis is dedicated to the analysis of microelectrode recordings collected during Deep Brain Stimulation (DBS), a popular palliative treatment for advanced PD. Analysis of this second source of data centers on the problems of unsupervised detection and sorting of action potentials or "spikes" in recordings of multiple cell activity, providing valuable information on real-time neural activity in the brain.
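A common baseline for the spike detection and sorting task described (a generic sketch, not the thesis's Bayesian method): amplitude thresholding against a robust noise estimate, followed by PCA features clustered with a GMM. The threshold multiplier and window length are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def detect_spikes(signal, thresh_mult=4.0, win=32):
    """Detect negative threshold crossings against a robust
    (median-based) noise estimate; cut a fixed-length snippet per event."""
    noise_sigma = np.median(np.abs(signal)) / 0.6745
    crossings = np.flatnonzero(signal < -thresh_mult * noise_sigma)
    snippets, last = [], -win
    for i in crossings:
        if i - last >= win and i + win <= len(signal):  # refractory gap
            snippets.append(signal[i:i + win])
            last = i
    return np.array(snippets)

def sort_spikes(snippets, n_units=2, n_pc=3, seed=0):
    """Project snippets onto principal components and cluster with a
    GMM; each mixture component is one putative unit (neuron)."""
    feats = PCA(n_components=n_pc).fit_transform(snippets)
    gmm = GaussianMixture(n_components=n_units, covariance_type='full',
                          random_state=seed).fit(feats)
    return gmm.predict(feats)
```

A Bayesian treatment, as pursued in the thesis, additionally quantifies uncertainty in each snippet's unit assignment and in the number of units.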
Abstract:
Bauxite refinery residues (red mud) are derived from the Bayer process by the digestion of crushed bauxite in concentrated sodium hydroxide at elevated temperatures and pressures. This slurry residue, if untreated, is unsuitable for discharge directly into the environment and is usually stored in tailings dams. The liquid portion has the potential for discharge, but requires pre-treatment before this can occur. The seawater neutralisation treatment facilitates a significant reduction in pH and dissolved metal concentrations, through the precipitation of hydrotalcite-like compounds and some other Mg, Ca, and Al hydroxide and carbonate minerals. The hydrotalcite-like compounds, precipitated during seawater neutralisation, also remove a range of transition metals, oxy-anions and other anionic species through a combination of intercalation and adsorption reactions: smaller anions are intercalated into the hydrotalcite matrix, while larger molecules are adsorbed on the particle surfaces. A phenomenon known as ‘reversion’ can occur if the seawater neutralisation process is not properly controlled. Reversion causes an increase in the pH and dissolved impurity levels of the neutralised effluent, rendering it unsuitable for discharge. It is believed that slow dissolution of components of the red mud residue and compounds formed during the neutralisation process are responsible for reversion. This investigation looked at characterising natural hydrotalcite (Mg6Al2(OH)16(CO3)·4H2O) and ‘Bayer’ hydrotalcite (synthesised using the seawater neutralisation process) using a variety of techniques including X-ray diffraction, infrared and Raman spectroscopy, and thermogravimetric analysis. This investigation showed that Bayer hydrotalcite comprises a mixture of 3:1 and 4:1 hydrotalcite structures and exhibited similar chemical characteristics to the 4:1 synthetic hydrotalcite.
Hydrotalcite formed from the seawater neutralisation of bauxite refinery residues has been found not to cause reversion. Other components in red mud were investigated to determine the cause of reversion, and this investigation found three components that contributed to reversion: 1) tricalcium aluminate, 2) hydrocalumite and 3) calcium hydroxide. Increasing the amount of magnesium in the neutralisation process has been found to be successful in reducing reversion.
Abstract:
Zeolite N, an EDI-type framework structure with ideal chemical formula K12Al10Si10O40Cl2·5H2O, was produced from kaolin between 100°C and 200°C in a continuously stirred reactor using potassic and potassic+sodic liquors containing a range of anions. Reactions using liquors such as KOH, KOH + KX (where X = F, Cl, Br, I, NO3, NO2), K2X (where X = CO3), KOH + NaCl or NaOH + KCl were complete (>95% product) in less than two hours, depending on the batch composition and temperature of reaction. With KOH and KCl in the reaction mixture and H2O/Al2O3 ~ 49, zeolite N was formed over a range of concentrations (1M < [KOH] < 18M) and reaction times (0.5h < t < 60h). At higher temperatures or higher KOH molarity, other potassic phases such as kalsilite or kaliophilite formed. In general, temperature and KOH molarity defined the extent of zeolite N formation under these conditions. The introduction of sodic reagents to the starting mixture, or the use of only one potassic reagent in the starting mixture, reduced the stability field for zeolite N formation. Zeolite N was also formed using zeolite 4A as a source of Al and Si, albeit with longer reaction times at a particular temperature when compared with kaolin as the source material.
Abstract:
This paper studies the missing covariate problem, which is often encountered in survival analysis. Three covariate imputation methods are employed in the study, and the effectiveness of each method is evaluated within the hazard prediction framework. Data from a typical engineering asset are used in the case study. Covariate values in some time steps are deliberately discarded to generate an incomplete covariate set. It is found that although the mean imputation method is simpler than the others for solving missing covariate problems, the results it produces can differ largely from the real values of the missing covariates. This study also shows that, in general, results obtained from the regression method are more accurate than those of the mean imputation method, but at the cost of a higher computational expense. The Gaussian Mixture Model (GMM) method is found to be the most effective of the three in terms of both computational efficiency and prediction accuracy.
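GMM-based imputation is commonly done as follows (a sketch under standard assumptions; the paper's exact scheme may differ): fit a GMM to complete rows, then impute a row's missing block with the posterior-weighted mixture of per-component Gaussian conditional means given the observed entries.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_impute(complete_rows, row, miss_idx, n_components=3, seed=0):
    """Impute row[miss_idx] as the GMM conditional expectation given
    the observed entries: each component contributes its Gaussian
    conditional mean, weighted by its posterior responsibility."""
    d = complete_rows.shape[1]
    obs_idx = [j for j in range(d) if j not in miss_idx]
    gmm = GaussianMixture(n_components=n_components, covariance_type='full',
                          random_state=seed).fit(complete_rows)
    xo = row[obs_idx]
    log_post, cond_means = [], []
    for k in range(gmm.n_components):
        mu, cov = gmm.means_[k], gmm.covariances_[k]
        S_oo = cov[np.ix_(obs_idx, obs_idx)]
        S_mo = cov[np.ix_(miss_idx, obs_idx)]
        diff = xo - mu[obs_idx]
        # E[x_miss | x_obs, component k]
        cond_means.append(mu[miss_idx] + S_mo @ np.linalg.solve(S_oo, diff))
        # log p(component k | x_obs), up to a shared additive constant
        _, logdet = np.linalg.slogdet(S_oo)
        log_post.append(np.log(gmm.weights_[k])
                        - 0.5 * (diff @ np.linalg.solve(S_oo, diff) + logdet))
    log_post = np.array(log_post)
    post = np.exp(log_post - np.logaddexp.reduce(log_post))
    return post @ np.array(cond_means)
```

Unlike mean imputation, this exploits the correlation between observed and missing covariates, which is consistent with the accuracy advantage the paper reports for the GMM method.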