117 results for Quasi-Likelihood
Abstract:
The ability of the technique of large-amplitude Fourier transformed (FT) ac voltammetry to facilitate the quantitative evaluation of electrode processes involving electron transfer and catalytically coupled chemical reactions has been evaluated. Predictions derived on the basis of detailed simulations imply that the rate of electron transfer is crucial, as confirmed by studies on the ferrocenemethanol (FcMeOH)-mediated electrocatalytic oxidation of ascorbic acid. Thus, at glassy carbon, gold, and boron-doped diamond electrodes, the introduction of the coupled electrocatalytic reaction, while producing significantly enhanced dc currents, does not affect the ac harmonics. This outcome is as expected if the FcMeOH(0/+) process remains fully reversible in the presence of ascorbic acid. In contrast, the ac harmonic components available from FT-ac voltammetry are predicted to be highly sensitive to the homogeneous kinetics when an electrocatalytic reaction is coupled to a quasi-reversible electron-transfer process. The required quasi-reversible scenario is available at an indium tin oxide electrode. Consequently, reversible potential, heterogeneous charge-transfer rate constant, and charge-transfer coefficient values of 0.19 V vs Ag/AgCl, 0.006 cm s⁻¹, and 0.55, respectively, along with a second-order homogeneous chemical rate constant of 2500 M⁻¹ s⁻¹ for the rate-determining step in the catalytic reaction, were determined by comparison of simulated responses and experimental voltammograms derived from the dc and first to fourth ac harmonic components generated at an indium tin oxide electrode. The theoretical concepts derived for large-amplitude FT ac voltammetry are believed to be applicable to a wide range of important solution-based mediated electrocatalytic reactions.
Abstract:
The method of generalized estimating equations (GEE) is a popular tool for analysing longitudinal (panel) data. Often, the covariates collected are time-dependent in nature, for example, age, relapse status, or monthly income. When using GEE to analyse longitudinal data with time-dependent covariates, crucial assumptions about the covariates are necessary for valid inferences to be drawn. When those assumptions do not hold or cannot be verified, Pepe and Anderson (1994, Communications in Statistics – Simulation and Computation 23, 939–951) advocated using an independence working correlation assumption in the GEE model as a robust approach. However, using GEE with the independence correlation assumption may lead to significant efficiency loss (Fitzmaurice, 1995, Biometrics 51, 309–317). In this article, we propose a method that extracts additional information from the estimating equations that are excluded by the independence assumption. The method always includes the estimating equations under the independence assumption, and the contribution from the remaining estimating equations is weighted according to the likelihood of each equation being a consistent estimating equation and the information it carries. We apply the method to a longitudinal study of the health of a group of Filipino children.
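As a point of reference for the baseline this method builds on, here is a minimal sketch of fitting a GEE with an independence working correlation in Python's statsmodels; the simulated data frame and variable names (child_id, age, income, health) are illustrative assumptions, not the study's dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_children, n_visits = 50, 4
df = pd.DataFrame({
    "child_id": np.repeat(np.arange(n_children), n_visits),
    "age": np.tile(np.arange(n_visits), n_children) + 2.0,
})
df["income"] = rng.normal(10.0, 2.0, len(df))  # a time-dependent covariate
df["health"] = (1.0 + 0.3 * df["age"] + 0.1 * df["income"]
                + rng.normal(size=len(df)))

# Independence working correlation: the robust choice advocated by
# Pepe and Anderson (1994) when time-dependent covariate assumptions
# may fail; the article's method reweights the equations this excludes.
model = sm.GEE.from_formula(
    "health ~ age + income",
    groups="child_id",
    data=df,
    cov_struct=sm.cov_struct.Independence(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())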
Abstract:
This study used automated data processing techniques to calculate a set of novel treatment plan accuracy metrics and investigate their usefulness as predictors of quality assurance (QA) success and failure. A total of 151 beams from 23 prostate and cranial IMRT treatment plans were used in this study. These plans had been evaluated before treatment using measurements with a diode array system. The TADA software suite was adapted to allow automatic batch calculation of several proposed plan accuracy metrics, including mean field area, small-aperture, off-axis and closed-leaf factors. All of these results were compared with the gamma pass rates from the QA measurements, and correlations were investigated. The mean field area factor provided a threshold field size (5 cm², equivalent to a 2.2 x 2.2 cm² square field), below which all beams failed the QA tests. The small-aperture score provided a useful predictor of plan failure when averaged over all beams, despite being weakly correlated with gamma pass rates for individual beams. By contrast, the closed-leaf and off-axis factors provided information about the geometric arrangement of the beam segments but were not useful for distinguishing between plans that passed and failed QA. This study has provided some simple tests for plan accuracy, which may help minimise time spent on QA assessments of treatments that are unlikely to pass.
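To make the screening idea concrete, here is a hypothetical sketch of an MU-weighted mean field area check against the 5 cm² threshold reported above; the Segment structure and the weighting scheme are assumptions for illustration, not the TADA implementation.

from dataclasses import dataclass

@dataclass
class Segment:
    area_cm2: float   # open aperture area of one beam segment (assumed input)
    mu: float         # monitor units delivered by the segment

def mean_field_area(segments: list[Segment]) -> float:
    """MU-weighted mean aperture area of a beam, in cm²."""
    total_mu = sum(s.mu for s in segments)
    return sum(s.area_cm2 * s.mu for s in segments) / total_mu

def flag_beam(segments: list[Segment], threshold_cm2: float = 5.0) -> bool:
    """True if the beam falls below the empirical QA-failure threshold."""
    return mean_field_area(segments) < threshold_cm2

beam = [Segment(area_cm2=3.8, mu=40.0), Segment(area_cm2=6.5, mu=10.0)]
print(mean_field_area(beam), flag_beam(beam))  # 4.34 cm² -> flagged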
Abstract:
We investigate the utility to computational Bayesian analyses of a particular family of recursive marginal likelihood estimators characterized by the (equivalent) algorithms known as "biased sampling" or "reverse logistic regression" in the statistics literature and "the density of states" in physics. Through a pair of numerical examples (including mixture modeling of the well-known galaxy dataset) we highlight the remarkable diversity of sampling schemes amenable to such recursive normalization, as well as the notable efficiency of the resulting pseudo-mixture distributions for gauging prior-sensitivity in the Bayesian model selection context. Our key theoretical contributions are to introduce a novel heuristic ("thermodynamic integration via importance sampling") for qualifying the role of the bridging sequence in this procedure, and to reveal various connections between these recursive estimators and the nested sampling technique.
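A minimal sketch of the recursive normalization idea on a two-distribution toy problem: samples from two densities with "unknown" normalizing constants are pooled, and the biased-sampling (reverse logistic regression) fixed point recovers the ratio of those constants. The densities and iteration count are illustrative choices, not the paper's examples.

import numpy as np

rng = np.random.default_rng(1)
n = 5000
x1 = rng.normal(0.0, 1.0, n)            # N(0, 1):  Z1 = sqrt(2*pi)
x2 = rng.normal(1.0, np.sqrt(2.0), n)   # N(1, 2):  Z2 = sqrt(4*pi)
x = np.concatenate([x1, x2])            # pooled sample from the 50/50 mixture

def q1(t): return np.exp(-0.5 * t**2)          # unnormalized densities
def q2(t): return np.exp(-0.25 * (t - 1.0)**2)

# Self-consistent biased-sampling iteration: each z_k is updated as the
# mean of q_k over the pooled sample, weighted by the current estimate
# of the (unnormalized) mixture density. Only the ratio is identified.
z = np.array([1.0, 1.0])
for _ in range(200):
    denom = 0.5 * q1(x) / z[0] + 0.5 * q2(x) / z[1]
    z = np.array([np.mean(q1(x) / denom), np.mean(q2(x) / denom)])

print(z[1] / z[0], np.sqrt(2.0))  # estimated vs true Z2/Z1 ~ 1.414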
Abstract:
The aim of the current study was to examine how a number of individual factors (demographic factors (age and gender), personality factors, risk-taking propensity, attitudes towards drink driving, and perceived legitimacy of drink driving enforcement) influence the self-reported likelihood of drink driving. The second aim of this study was to examine the potential of attitudes to mediate the relationship between risk-taking and self-reported likelihood of drink driving. In total, 293 Queensland drivers volunteered to participate in an online survey that assessed their self-reported likelihood of drink driving in the next month, demographics, traffic-related demographics, personality factors, risk-taking propensity, attitudes towards drink driving, and perceived legitimacy of drink driving enforcement. An ordered logistic regression analysis was used to evaluate the first aim of the study: at the first step, the demographic variables were entered; at the second step, the personality and risk-taking variables were entered; at the third step, the attitudes and perceived legitimacy variables were entered. Being a younger driver and having a high risk-taking propensity were related to self-reported likelihood of drink driving. However, when the attitudes variable was entered, these individual factors were no longer significant, with attitudes being the most important predictor of self-reported drink driving likelihood. A significant mediation model was found for the second aim of the study, such that attitudes mediated the relationship between risk-taking and self-reported likelihood of drink driving. Considerable effort and resources are devoted by traffic authorities to reducing drink driving on the Australian road network. Notwithstanding these efforts, some participants still held positive attitudes towards drink driving and reported that they were likely to drink drive in the future. These findings suggest that more work is needed to address attitudes regarding the dangerousness of drink driving.
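For illustration, a minimal sketch of the three-step ordered logistic analysis using statsmodels' OrderedModel on simulated data; the variable names, effect sizes, and outcome coding are invented, not the survey's.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 293
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "risk_taking": rng.normal(0.0, 1.0, n),
    "attitudes": rng.normal(0.0, 1.0, n),
})
# Simulated ordinal outcome: 4 ordered categories of self-reported
# likelihood of drink driving, driven mainly by attitudes.
latent = -0.03 * df["age"] + 0.4 * df["risk_taking"] + 0.8 * df["attitudes"]
df["likelihood_dd"] = pd.cut(latent + rng.logistic(size=n), 4, labels=False)

# Step 1: demographics; step 2: + risk-taking; step 3: + attitudes.
for cols in (["age"],
             ["age", "risk_taking"],
             ["age", "risk_taking", "attitudes"]):
    fit = OrderedModel(df["likelihood_dd"], df[cols],
                       distr="logit").fit(method="bfgs", disp=False)
    print(cols, fit.params.round(3).to_dict())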
Abstract:
Wastewater containing human sewage is often discharged with little or no treatment into the Antarctic marine environment. Faecal sterols (primarily coprostanol) in sediments have been used for assessment of human sewage contamination in this environment, but in situ production and indigenous faunal inputs can confound such determinations. Using gas chromatography with mass spectral detection profiles of both C27 and C29 sterols, potential sources of faecal sterols were examined in nearshore marine sediments, encompassing sites proximal and distal to the wastewater outfall at Davis Station. Faeces from indigenous seals and penguins were also examined. Faeces from several indigenous species contained significant quantities of coprostanol but not 24-ethylcoprostanol, which is present in human faeces. In situ coprostanol and 24-ethylcoprostanol production was identified by co-production of their respective epi isomers at sites remote from the wastewater source and in high total organic matter sediments. A C29 sterols-based polyphasic likelihood assessment matrix for human sewage contamination is presented, which distinguishes human from local fauna faecal inputs and in situ production in the Antarctic environment. Sewage contamination was detected up to 1.5 km from Davis Station.
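Purely as an illustration of the decision logic such a matrix encodes (human input marked by 24-ethylcoprostanol, in situ production by epi-isomer co-production), here is a sketch with invented placeholder thresholds; it is not the published assessment matrix.

# HYPOTHETICAL thresholds for illustration only; consult the published
# assessment matrix for real determinations.
def assess_source(coprostanol: float, ethylcoprostanol: float,
                  epicoprostanol: float) -> str:
    """Classify the likely origin of faecal sterols in a sediment sample.

    All inputs are concentrations in the same units (e.g. ng/g dry sediment).
    """
    if coprostanol < 10.0:                   # placeholder detection level
        return "no significant faecal input"
    if epicoprostanol / coprostanol > 0.3:   # epi isomer co-production
        return "in situ production likely"
    if ethylcoprostanol / coprostanol > 0.2: # C29 marker present in humans
        return "human sewage likely"
    return "indigenous fauna (seals/penguins) likely"

print(assess_source(coprostanol=120.0, ethylcoprostanol=40.0,
                    epicoprostanol=5.0))     # -> human sewage likely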
Abstract:
A computationally efficient sequential Monte Carlo algorithm is proposed for the sequential design of experiments for the collection of block data described by mixed effects models. The difficulty in applying a sequential Monte Carlo algorithm in such settings is the need to evaluate the observed data likelihood, which is typically intractable for all but linear Gaussian models. To overcome this difficulty, we propose to unbiasedly estimate the likelihood, and to perform inference and make decisions based on an exact-approximate algorithm. Two estimators are proposed: one based on quasi-Monte Carlo methods and one based on the Laplace approximation with importance sampling. Both of these approaches can be computationally expensive, so we propose exploiting parallel computational architectures to ensure designs can be derived in a timely manner. We also extend our approach to allow for model uncertainty. This research is motivated by important pharmacological studies related to the treatment of critically ill patients.
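A minimal sketch of the second estimator for a single block, under an assumed random-intercept logistic model: the random effect is integrated out by importance sampling with a Gaussian proposal centred at the Laplace mode. The model details and tuning constants are illustrative assumptions, not the paper's pharmacological setting.

import numpy as np
from scipy import optimize, stats

def block_loglik_hat(y, x, beta, sigma_b, n_draws=500, rng=None):
    """Log of an unbiased importance-sampling estimate of p(y | beta, sigma_b)."""
    rng = rng or np.random.default_rng()

    def neg_joint(b):  # -log p(y, b): logistic likelihood + Gaussian prior
        eta = x * beta + b
        return -(np.sum(y * eta - np.log1p(np.exp(eta)))
                 + stats.norm.logpdf(b, 0.0, sigma_b))

    mode = optimize.minimize_scalar(neg_joint).x   # Laplace mode
    scale = 1.5 * sigma_b                          # slightly fat proposal
    b = rng.normal(mode, scale, n_draws)           # importance draws
    log_w = (-np.array([neg_joint(bi) for bi in b])
             - stats.norm.logpdf(b, mode, scale))  # log p(y, b) - log q(b)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))  # stable log-mean-exp

rng = np.random.default_rng(2)
x = rng.normal(size=20)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * x + 0.3))))
print(block_loglik_hat(y, x, beta=0.5, sigma_b=0.3, rng=rng))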
Abstract:
This project developed, validated and tested the reliability of a risk assessment tool to predict the risk that venous leg ulcers will fail to heal within 24 weeks. The risk assessment tool will allow clinicians to determine realistic outcomes for their patients, promote early healing and potentially avoid weeks of inappropriate therapy. The tool will also assist in addressing specific risk factors and guide decisions on early, alternative, tailored interventions.
Abstract:
Purpose: The purpose of this paper is to review, critique and develop a research agenda for the Elaboration Likelihood Model (ELM). The model was introduced by Petty and Cacioppo over three decades ago and has since been modified, revised and extended. Given modern communication contexts, it is appropriate to question the model's validity and relevance. Design/methodology/approach: The authors develop a conceptual approach, based on a fully comprehensive and extensive review and critique of the ELM and its development since its inception. Findings: This paper focuses on major issues concerning the ELM. These include the model's assumptions and its descriptive nature; continuum questions, multi-channel processing and mediating variables; and the need to replicate the ELM, before offering recommendations for its future development. Research limitations/implications: This paper offers a series of questions by way of research implications. These include whether the ELM could or should be replicated or extended, whether argument quality needs greater conceptualization, whether movement along the continuum and between the central and peripheral routes to persuasion can be explained, and whether new methodologies and technologies can help better understand consumer thinking and behaviour. All of these relate to the current need to explore the relevance of the ELM in a more modern context. Practical implications: It is time to question the validity and relevance of the ELM. The diversity of online and offline media options and the variants of consumer choice raise significant issues. Originality/value: While the ELM continues to be widely cited and taught as one of the major cornerstones of persuasion, questions are raised concerning its relevance and validity in 21st-century communication contexts.
Abstract:
Speech recognition in car environments has been identified as a valuable means for reducing driver distraction when operating noncritical in-car systems. Under such conditions, however, speech recognition accuracy degrades significantly, and techniques such as speech enhancement are required to improve these accuracies. Likelihood-maximizing (LIMA) frameworks optimize speech enhancement algorithms based on recognized state sequences rather than traditional signal-level criteria such as maximizing signal-to-noise ratio. LIMA frameworks typically require calibration utterances to generate optimized enhancement parameters that are used for all subsequent utterances. Under such a scheme, suboptimal recognition performance occurs in noise conditions that differ significantly from those present during the calibration session – a serious problem in rapidly changing noise environments out on the open road. In this chapter, we propose a dialog-based design that allows regular optimization iterations in order to track the ever-changing noise conditions. Experiments using Mel-filterbank noise subtraction (MFNS) are performed to determine the optimization requirements for vehicular environments and show that minimal optimization is required to improve speech recognition, avoid over-optimization, and ultimately assist with semi-real-time operation. It is also shown that the proposed design is able to provide improved recognition performance over frameworks incorporating a calibration session only.
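A minimal sketch of the MFNS step that such a framework would tune; the filterbank shapes and the over-subtraction parameterization are illustrative assumptions, with alpha standing in for the parameter a LIMA-style optimization would adjust per noise condition.

import numpy as np

def mfns(mel_frames: np.ndarray, noise_frames: np.ndarray,
         alpha: float = 1.0, floor: float = 0.01) -> np.ndarray:
    """Subtract an estimated noise spectrum from mel filterbank energies.

    mel_frames:   (n_frames, n_mel) filterbank energies of noisy speech
    noise_frames: (m_frames, n_mel) energies from a noise-only region
    alpha:        over-subtraction factor (the tunable parameter)
    floor:        spectral floor to avoid negative energies
    """
    noise_est = noise_frames.mean(axis=0)          # mean noise energy per band
    cleaned = mel_frames - alpha * noise_est       # subtract scaled estimate
    return np.maximum(cleaned, floor * mel_frames) # apply spectral floor

# Toy usage with random "energies"; real input would come from an STFT
# followed by a mel filterbank.
rng = np.random.default_rng(3)
noisy = rng.gamma(2.0, 1.0, size=(100, 24))
noise = rng.gamma(2.0, 0.5, size=(20, 24))
print(mfns(noisy, noise, alpha=1.2).shape)  # (100, 24)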
Abstract:
This article analyses the effects of NGO microfinance programmes on household welfare in Vietnam. Data on 470 households across 25 villages were collected using a quasi-experimental survey approach to overcome any self-selection bias. The sample was designed so that member households of microfinance programmes were compared with non-member households with similar characteristics. The analysis shows no significant effects of participation in NGO microfinance on household welfare, proxied by income and consumption per adult equivalent.