117 results for Quasi-Likelihood


Relevance:

20.00%

Publisher:

Abstract:

This paper proposes the use of eigenvoice modeling techniques with the Cross Likelihood Ratio (CLR) as a criterion for speaker clustering within a speaker diarization system. The CLR has previously been shown to be a robust decision criterion for speaker clustering using Gaussian Mixture Models. Recently, eigenvoice modeling techniques have become increasingly popular, due to their ability to adequately represent a speaker from sparse training data and to better capture differences in speaker characteristics. This paper therefore proposes capitalizing on the advantages of eigenvoice modeling within a CLR framework. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, with a 35.1% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.

Relevance:

20.00%

Publisher:

Abstract:

...the probabilistic computer simulation study by Dunham and colleagues evaluating the impact of different cervical spine management (CSM) strategies on tetraplegia and brain injury outcomes.1 Based on literature findings, expert opinion and the use of advanced programming techniques, the authors conclude that early collar removal without cervical spine magnetic resonance imaging (MRI) is a preferable CSM strategy for comatose, blunt trauma patients with extremity movement and a negative cervical spine computed tomography (CT) scan. Although we do not have the required expertise to comment on the applied statistical approach, we would like to comment on one of the medical assumptions made by the authors, namely the likelihood of tetraplegia in this specific population....

Relevance:

20.00%

Publisher:

Abstract:

Finite Element Modeling (FEM) has become a vital tool in automotive design and development processes. FEM of the human body is a technique capable of estimating parameters that are difficult to measure in experimental studies, with human body segments modeled as complex, dynamic entities. Several studies have been dedicated to attaining close-to-real FEMs of the human body (Pankoke and Siefert 2007; Amann, Huschenbeth et al. 2009; ESI 2010). The aim of this paper is to identify and appraise state-of-the-art models of the human body that incorporate detailed pelvis and/or lower extremity models. Six databases and search engines were used to obtain literature, and the search was limited to studies published in English since 2000. The initial search identified 636 pelvis-related papers, 834 buttocks-related papers, 505 thigh-related papers, 927 femur-related papers, 2039 knee-related papers, 655 shank-related papers, 292 tibia-related papers, 110 fibula-related papers, 644 ankle-related papers and 5660 foot-related papers. A refined search returned 100 pelvis-related papers, 45 buttocks-related papers, 65 thigh-related papers, 162 femur-related papers, 195 knee-related papers, 37 shank-related papers, 80 tibia-related papers, 30 fibula-related papers, 102 ankle-related papers and 246 foot-related papers. The refined literature list was further restricted by appraisal against modified LOW appraisal criteria. Studies with unclear methodologies, a focus on populations with pathology, or sport-related dynamic motion modeling were excluded.
The final literature list included fifteen models, each assessed against the percentile the model represents, the gender it was based on, the human body segment or segments included, the sample size used to develop it, the source of the geometric/anthropometric values used, the posture it represents and the finite element solver used. The results of this literature review indicate a bias in the available models towards 50th-percentile male modeling, with a notable concentration on the pelvis, femur and buttocks segments.

Relevance:

20.00%

Publisher:

Abstract:

Recessions impact the retail sector, so research into consumer decision making during such times is imperative. In response, our study takes an innovative approach to examining how the perceived importance of retail store categories in a shopping mall influences the relationship between consumers' shopping attitudes and their likelihood of purchasing in those categories during a recession. The overall findings show that the importance of a product category to a consumer, which is often overlooked, has a strong explanatory influence on consumer purchase intentions for specific retail store categories in a shopping mall under recession conditions. Findings also show that, for consumers who have altered their shopping behaviour, the perceived importance of a retail store category fully mediates the relationship for the Majors, Leisure, Food Catered and Mini Majors categories, and partially mediates it for Apparel. Importance has no mediating effect for Food Retail, General Retail, Mobile Phone Services, Homewares, and Retail Services. Our study makes a key contribution to the retail management literature: the findings suggest that redefining and articulating the importance of the value offering for specific retail store categories can help reduce the impact of changes in consumers' recessionary shopping intentions across the mall tenant mix. Such actions can then help preserve the image of the shopping mall in the minds of consumers when the economic recovery begins.

Relevance:

20.00%

Publisher:

Abstract:

Most unsignalised intersection capacity calculation procedures are based on gap acceptance models. The accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers' sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods, the Average Central Gap (ACG), Strength Weighted Central Gap (SWCG), and Mode Central Gap (MCG) methods, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulation mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close-to-perfect fit to simulation mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor-stream and major-stream flow conditions and for a variety of minor-stream movement types, to compare critical gap estimates from MLE and MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.

Relevance:

20.00%

Publisher:

Abstract:

Australian research on Indigenous sentencing disparities that meets the standard of international work is relatively recent. Contrary to expectations based on international research, Australian studies generally have not found Indigenous offenders to be treated substantially more harshly than non-Indigenous offenders in similar circumstances. However, this research has primarily focused on adult higher courts, with little attention to lower courts and children's courts. In this article, we examine whether Indigeneity has a direct impact on the judicial decision to incarcerate in three courts (adult higher, adult lower, and children's higher) in Queensland. We found no significant differences in the likelihood of a sentence of incarceration in the higher courts (adult and children's). In contrast, in the lower courts, Indigenous defendants were more likely to be imprisoned than non-Indigenous defendants when sentenced under statistically similar circumstances.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel technique for segmenting an audio stream into homogeneous regions according to speaker identities, background noise, music, and environmental and channel conditions. Audio segmentation is useful in audio diarization systems, which aim to annotate an input audio stream with information that attributes temporal regions of the audio to their specific sources. The segmentation method introduced in this paper uses the Generalized Likelihood Ratio (GLR), computed between two adjacent sliding windows over preprocessed speech. This approach is inspired by the popular segmentation method proposed in the pioneering work of Chen and Gopalakrishnan, which uses the Bayesian Information Criterion (BIC) with an expanding search window; this paper aims to identify and address the shortcomings of that approach. The proposed segmentation strategy is evaluated on the 2002 Rich Transcription (RT-02) Evaluation dataset, achieving a miss rate of 19.47% and a false alarm rate of 16.94% at the optimal threshold.

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes the use of Bayesian approaches with the cross likelihood ratio (CLR) as a criterion for speaker clustering within a speaker diarization system, using eigenvoice modeling techniques. The CLR has previously been shown to be an effective decision criterion for speaker clustering using Gaussian mixture models. Recently, eigenvoice modeling has become an increasingly popular technique, due to its ability to adequately represent a speaker from sparse training data and to better capture differences in speaker characteristics. Integrating eigenvoice modeling into the CLR framework, to capitalize on the advantages of both techniques, has also been shown to benefit the speaker clustering task. Building on that success, this paper proposes the use of Bayesian methods to compute the conditional probabilities used in computing the CLR, effectively combining the eigenvoice-CLR framework with the advantages of a Bayesian approach to the diarization problem. Results obtained on the 2002 Rich Transcription (RT-02) Evaluation dataset show improved clustering performance, with a 33.5% relative improvement in the overall Diarization Error Rate (DER) compared to the baseline system.

Relevance:

20.00%

Publisher:

Abstract:

Aim. A protocol for a new peer-led self-management programme for community-dwelling older people with diabetes in Shanghai, China. Background. The increasing prevalence of type 2 diabetes poses major public health challenges. Appropriate education programmes could help people with diabetes to achieve self-management and better health outcomes. Providing education programmes to the fast-growing number of people with diabetes presents a real challenge to the Chinese healthcare system, which is strained by personnel and funding shortages. Empirical literature and expert opinions suggest that peer education programmes are promising. Design. Quasi-experimental. Methods. This study is a non-equivalent control group design (protocol approved in January 2008). A total of 190 people, with 95 participants in each group, will be recruited from two different, but similar, communities. The programme, based on Social Cognitive Theory, will consist of basic diabetes instruction and social-support and self-efficacy-enhancing group activities. Basic diabetes instruction sessions will be delivered by health professionals, whereas the social-support and self-efficacy-enhancing group activities will be led by peer leaders. Outcome variables include self-efficacy, social support, self-management behaviours, depressive status, quality of life and healthcare utilization, which will be measured at baseline and at 4 and 12 weeks. Discussion. This theory-based programme, tailored to Chinese patients, has potential for improving diabetes self-management and subsequent health outcomes. In addition, the delivery mode, through involvement of peer leaders and existing community networks, is especially promising considering the healthcare resource shortage in China.

Relevance:

20.00%

Publisher:

Abstract:

This chapter contains sections titled: Introduction; Case study: Estimating transmission rates of nosocomial pathogens; Models and methods; Data analysis and results; Discussion; References.

Relevance:

20.00%

Publisher:

Abstract:

Approximate Bayesian computation has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings, one that bypasses simulations from the model and the choice of approximate Bayesian computation parameters (summary statistics, distance, tolerance) while remaining convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The Bayesian computation with empirical likelihood algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models.

Relevance:

20.00%

Publisher:

Abstract:

The positive relationship between speed and crash risk and severity is robust and well-established. While excessive speeding is typically regarded by the public as a common contributing factor in road crashes, speeding remains a common traffic infringement and an arguably socially acceptable behaviour, particularly at low levels over the speed limit. This suggests that other factors potentially contribute to this disparity between crash perceptions and actual behaviours. Previous work has described associations between perceptions of the legitimacy of speed enforcement, attitudes, and how they relate to the likelihood of speeding. This study sought to examine the nature of the relationships between these variables more closely. In total, 293 Queensland drivers participated in a study that examined how demographics, personality variables, attitudes, and perceptions of the legitimacy of enforcement contributed to drivers' self-reported likelihood of speeding. Results suggested that positive attitudes towards speeding had the greatest impact on the likelihood of speeding behaviours. Being younger and having higher levels of the personality trait of extraversion were also associated with greater self-reported likelihood of speeding. Attitudes were found to mediate the relationship between perceived legitimacy of speed enforcement and self-reported likelihood of speeding. A subgroup analysis of participants with positive and negative attitudes towards speeding revealed that different sets of variables were predictive of self-reported likelihood of speeding for the two subgroups. This highlights the potential importance of attitudes in understanding the influence of perceptions of legitimacy of speed enforcement on speeding behaviour, and the need for a targeted rather than a 'one size fits all' approach to changing attitudes and, ultimately, behaviour. The findings of the current study help to further understand why some drivers continue to speed.

Relevance:

20.00%

Publisher:

Abstract:

This book provides a general framework for specifying, estimating, and testing time series econometric models. Special emphasis is given to estimation by maximum likelihood, but other methods are also discussed, including quasi-maximum likelihood estimation, generalized method of moments estimation, nonparametric estimation, and estimation by simulation. An important advantage of adopting the principle of maximum likelihood as the unifying framework of the book is that many of the estimators and test statistics proposed in econometrics can be derived within a likelihood framework, providing a coherent vehicle for understanding their properties and interrelationships. In contrast to many existing econometric textbooks, which deal mainly with the theoretical properties of estimators and test statistics through a theorem-proof presentation, this book squarely addresses implementation, providing direct conduits between the theory and applied work.