747 results for "statistical framework"
Abstract:
In the Bayesian framework a standard approach to model criticism is to compare some function of the observed data to a reference predictive distribution. The result of the comparison can be summarized in the form of a p-value, and it is well known that computation of some kinds of Bayesian predictive p-values can be challenging. The use of regression adjustment approximate Bayesian computation (ABC) methods is explored for this task. Two problems are considered. The first is the calibration of posterior predictive p-values so that they are uniformly distributed under some reference distribution for the data. Computation is difficult because the calibration process requires repeated approximation of the posterior for different data sets under the reference distribution. The second problem considered is approximation of distributions of prior predictive p-values for the purpose of choosing weakly informative priors in the case where the model checking statistic is expensive to compute. Here the computation is difficult because of the need to repeatedly sample from a prior predictive distribution for different values of a prior hyperparameter. In both these problems we argue that high accuracy in the computations is not required, which makes fast approximations such as regression adjustment ABC very useful. We illustrate our methods with several examples.
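The regression adjustment step the abstract refers to can be sketched on a toy conjugate model where the exact posterior is known. Everything below (the prior, the summary model, all variable names) is an illustrative assumption for the sketch, not one of the paper's examples: draws from the prior are accepted when their simulated summary lies close to the observed one, and a fitted linear regression then shifts each accepted draw to the observed summary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model so the exact posterior is known: theta ~ N(0, 10),
# observed summary s | theta ~ N(theta, 1). Illustrative assumption only.
def simulate_summary(theta):
    return theta + rng.normal(0.0, 1.0, size=theta.shape)

s_obs = 1.5
theta = rng.normal(0.0, np.sqrt(10.0), size=20000)  # draws from the prior
s = simulate_summary(theta)                          # one summary per draw

# Plain rejection ABC: keep the 5% of draws whose summaries are closest.
dist = np.abs(s - s_obs)
keep = dist <= np.quantile(dist, 0.05)
theta_acc, s_acc = theta[keep], s[keep]

# Regression adjustment: fit theta ~ a + b*s on the accepted draws,
# then shift every accepted draw to the observed summary.
b, a = np.polyfit(s_acc, theta_acc, 1)
theta_adj = theta_acc - b * (s_acc - s_obs)

print(theta_adj.mean())  # close to the exact posterior mean 10/11 * s_obs
```

Because the adjusted samples are cheap to produce, the whole pipeline can be re-run for many reference data sets, which is what makes the p-value calibration in the abstract computationally feasible.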
Abstract:
This paper offers an uncertainty quantification (UQ) study applied to the performance analysis of the ERCOFTAC conical diffuser. A deterministic CFD solver is coupled with a non-statistical generalised Polynomial Chaos (gPC) representation based on a pseudo-spectral projection method. Such an approach has the advantage of not requiring any modification of the CFD code for the propagation of random disturbances in the aerodynamic field. The stochastic results highlight the importance of the inlet velocity uncertainties on the pressure recovery, both alone and when coupled with a second uncertain variable. From a theoretical point of view, we investigate the possibility of building our gPC representation on an arbitrary grid, thus increasing the flexibility of the stochastic framework.
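The non-intrusive character of pseudo-spectral projection can be shown in a few lines: the deterministic solver is only *evaluated* at quadrature nodes, and the gPC coefficients come from weighted sums of those evaluations. In this minimal sketch a cheap analytic function stands in for the CFD solver and the uncertain inlet velocity is modelled as a standard Gaussian; both are assumptions for illustration, not the paper's configuration.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def solver(xi):
    # Hypothetical stand-in for one deterministic CFD run: the response
    # (e.g. pressure recovery) to the uncertain input xi.
    return np.exp(0.3 * xi)

order = 6
nodes, weights = He.hermegauss(order + 1)
weights = weights / np.sqrt(2.0 * np.pi)  # normalise to the N(0,1) measure

# Pseudo-spectral projection: c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2],
# where E[He_k^2] = k! for probabilists' Hermite polynomials.
coeffs = []
for k in range(order + 1):
    basis_k = He.hermeval(nodes, [0.0] * k + [1.0])
    coeffs.append(np.sum(weights * solver(nodes) * basis_k) / math.factorial(k))

# Mean and variance of the quantity of interest drop out of the coefficients.
mean = coeffs[0]
variance = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
print(mean, variance)
```

The solver call sits behind a plain function boundary, which is exactly why no modification of the CFD code is needed; swapping the Gauss-Hermite nodes for an arbitrary grid would only change how the projection integrals are approximated.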
Abstract:
This thesis proposes three novel models which extend the statistical methodology for motor unit number estimation, a clinical neurology technique. Motor unit number estimation is important in the treatment of degenerative muscular diseases and, potentially, spinal injury. Additionally, a recent and untested statistic to enable statistical model choice is found to be a practical alternative for larger datasets. The existing methods for dose finding in dual-agent clinical trials are found to be suitable only for designs of modest dimensions. The model choice case-study is the first of its kind containing interesting results using so-called unit information prior distributions.
Abstract:
There is a wide range of potential study designs for intervention studies to decrease nosocomial infections in hospitals. The analysis is complex due to competing events, clustering, multiple timescales and time-dependent period and intervention variables. This review considers the popular pre-post quasi-experimental design and compares it with randomized designs. Randomization can be done in several ways: randomization of the cluster [intensive care unit (ICU) or hospital] in a parallel design; randomization of the sequence in a cross-over design; and randomization of the time of intervention in a stepped-wedge design. We introduce each design in the context of nosocomial infections and discuss the designs with respect to the following key points: bias, control for nonintervention factors, and generalizability. Statistical issues are discussed. A pre-post-intervention design is often the only choice that will be informative for a retrospective analysis of an outbreak setting. It can be seen as a pilot study with further, more rigorous designs needed to establish causality. To yield internally valid results, randomization is needed. Generally, the first choice in terms of the internal validity should be a parallel cluster randomized trial. However, generalizability might be stronger in a stepped-wedge design because a wider range of ICU clinicians may be convinced to participate, especially if there are pilot studies with promising results. For analysis, the use of extended competing risk models is recommended.
Abstract:
AIM This paper presents a discussion on the application of a capability framework for advanced practice nursing standards/competencies. BACKGROUND There is acceptance that competencies are useful and necessary for definition and education of practice-based professions. Competencies have been described as appropriate for practice in stable environments with familiar problems. Increasingly competencies are being designed for use in the health sector for advanced practice such as the nurse practitioner role. Nurse practitioners work in environments and roles that are dynamic and unpredictable necessitating attributes and skills to practice at advanced and extended levels in both familiar and unfamiliar clinical situations. Capability has been described as the combination of skills, knowledge, values and self-esteem which enables individuals to manage change, be flexible and move beyond competency. DESIGN A discussion paper exploring 'capability' as a framework for advanced nursing practice standards. DATA SOURCES Data were sourced from electronic databases as described in the background section. IMPLICATIONS FOR NURSING As advanced practice nursing becomes more established and formalized, novel ways of teaching and assessing the practice of experienced clinicians beyond competency are imperative for the changing context of health services. CONCLUSION Leading researchers into capability in health care state that traditional education and training in health disciplines concentrates mainly on developing competence. To ensure that healthcare delivery keeps pace with increasing demand and a continuously changing context there is a need to embrace capability as a framework for advanced practice and education.
Abstract:
Speech recognition in car environments has been identified as a valuable means for reducing driver distraction when operating noncritical in-car systems. Under such conditions, however, speech recognition accuracy degrades significantly, and techniques such as speech enhancement are required to improve these accuracies. Likelihood-maximizing (LIMA) frameworks optimize speech enhancement algorithms based on recognized state sequences rather than traditional signal-level criteria such as maximizing signal-to-noise ratio. LIMA frameworks typically require calibration utterances to generate optimized enhancement parameters that are used for all subsequent utterances. Under such a scheme, suboptimal recognition performance occurs in noise conditions that are significantly different from that present during the calibration session – a serious problem in rapidly changing noise environments out on the open road. In this chapter, we propose a dialog-based design that allows regular optimization iterations in order to track the ever-changing noise conditions. Experiments using Mel-filterbank noise subtraction (MFNS) are performed to determine the optimization requirements for vehicular environments and show that minimal optimization is required to improve speech recognition, avoid over-optimization, and ultimately assist with semireal-time operation. It is also shown that the proposed design is able to provide improved recognition performance over frameworks incorporating a calibration session only.
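The core enhancement step can be sketched as per-band noise subtraction with an over-subtraction factor and a spectral floor; the over-subtraction factor is the kind of parameter a LIMA framework would re-optimise as noise conditions change. The toy filterbank magnitudes, band counts, and parameter values below are illustrative assumptions, not the chapter's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Mel-filterbank magnitudes: 23 bands x 200 frames, with the first
# 20 frames noise-only "calibration" frames (illustrative assumption).
n_bands, n_frames, n_cal = 23, 200, 20
clean = np.abs(rng.normal(2.0, 0.5, (n_bands, n_frames)))
clean[:, :n_cal] = 0.0                                    # no speech yet
noise = np.abs(rng.normal(1.0, 0.1, (n_bands, n_frames)))
noisy = clean + noise

# Noise estimate from the calibration frames, one value per band.
noise_est = noisy[:, :n_cal].mean(axis=1, keepdims=True)

def subtract(noisy, noise_est, alpha=1.2, beta=0.01):
    # Over-subtract by alpha, then clamp to a spectral floor beta*noisy.
    # alpha is the enhancement parameter a dialog-based LIMA scheme
    # would periodically re-optimise against recognised state sequences.
    return np.maximum(noisy - alpha * noise_est, beta * noisy)

enhanced = subtract(noisy, noise_est)
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((enhanced - clean) ** 2)
print(err_before, err_after)  # subtraction should reduce the error
```

Re-running `subtract` with a freshly optimised `alpha` at regular dialog turns, rather than only after an initial calibration session, is the change in operating regime the chapter proposes.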
Abstract:
This thesis examines the existing frameworks for energy management in the brewing industry and details the design, development and implementation of a new framework at a modern brewery. The aim of the research was to develop an energy management framework to identify opportunities in a systematic manner using Systems Engineering concepts and principles. This work led to a Sustainable Energy Management Framework, SEMF. Using the SEMF approach, one of Australia's largest breweries has achieved the number 1 ranking in the world for water use in the production of beer, and has also improved KPIs and sustained the energy management improvements implemented during the past 15 years. The framework can be adapted to other manufacturing industries in the Australian context and is considered to be a new concept and a potentially important tool for energy management.
Abstract:
This paper addresses research from a three-year longitudinal study that engaged children in data modeling experiences from the beginning school year through to third year (6-8 years). A data modeling approach to statistical development differs in several ways from what is typically done in early classroom experiences with data. In particular, data modeling immerses children in problems that evolve from their own questions and reasoning, with core statistical foundations established early. These foundations include a focus on posing and refining statistical questions within and across contexts, structuring and representing data, making informal inferences, and developing conceptual, representational, and metarepresentational competence. Examples are presented of how young learners developed and sustained informal inferential reasoning and metarepresentational competence across the study to become “sophisticated statisticians”.