992 results for Statistical Computation


Relevance: 20.00%

Abstract:

This paper addresses the need for a house rental model in Townsville, Australia, and describes models developed for predicting house rental levels. An analytical model is built on a priori selected variables and parameters of rental levels, and regression models are generated to provide a comparison with it. Issues in model development and performance evaluation are discussed. The comparison indicates that the analytical model performs better than the regression models.
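
For readers unfamiliar with the regression side of such a comparison, the sketch below fits an ordinary least squares model to hypothetical rental data; the predictors (bedrooms, distance to the CBD, dwelling age) and all figures are illustrative assumptions, not variables from the study.

```python
import numpy as np

# Hypothetical rental data: columns = bedrooms, distance_km, age_years
X = np.array([[2, 5.0, 10],
              [3, 2.5, 5],
              [4, 8.0, 20],
              [3, 1.0, 2],
              [2, 6.5, 15]], dtype=float)
y = np.array([320.0, 450.0, 380.0, 520.0, 300.0])  # weekly rent ($)

# Add an intercept column and fit ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

print("coefficients:", coef)
print("residual std:", np.std(y - pred))
```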

Relevance: 20.00%

Abstract:

In this paper we present a new simulation methodology in order to obtain exact or approximate Bayesian inference for models for low-valued count time series data that have computationally demanding likelihood functions. The algorithm fits within the framework of particle Markov chain Monte Carlo (PMCMC) methods. The particle filter requires only model simulations and, in this regard, our approach has connections with approximate Bayesian computation (ABC). However, an advantage of using the PMCMC approach in this setting is that simulated data can be matched with data observed one-at-a-time, rather than attempting to match on the full dataset simultaneously or on a low-dimensional non-sufficient summary statistic, which is common practice in ABC. For low-valued count time series data we find that it is often computationally feasible to match simulated data with observed data exactly. Our particle filter maintains $N$ particles by repeating the simulation until $N+1$ exact matches are obtained. Our algorithm creates an unbiased estimate of the likelihood, resulting in exact posterior inferences when included in an MCMC algorithm. In cases where exact matching is computationally prohibitive, a tolerance is introduced as per ABC. A novel aspect of our approach is that we introduce auxiliary variables into our particle filter so that partially observed and/or non-Markovian models can be accommodated. We demonstrate that Bayesian model choice problems can be easily handled in this framework.
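
A minimal sketch of the exact-matching ("alive") particle filter idea described above, with a toy count-data simulator standing in for the computationally demanding model; the function names and the toy binomial model are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def simulate(theta, state):
    """Toy low-count simulator (Binomial(10, theta)), a stand-in for an
    intractable model. The toy ignores `state`, but the filter threads it
    through so that Markovian models fit the same interface."""
    y = sum(random.random() < theta for _ in range(10))
    return y, y  # (new state, simulated count)

def alive_particle_filter(theta, observations, N=20):
    """Match simulations to each observation exactly, one at a time."""
    log_lik, states = 0.0, [0] * N
    for y_obs in observations:
        matches, trials, new_states = 0, 0, []
        while matches < N + 1:                 # run until N+1 exact matches
            ancestor = random.choice(states)
            new_state, y_sim = simulate(theta, ancestor)
            trials += 1
            if y_sim == y_obs:
                matches += 1
                if matches <= N:
                    new_states.append(new_state)
        log_lik += math.log(N / (trials - 1))  # unbiased likelihood factor
        states = new_states
    return log_lik

print(alive_particle_filter(0.2, [2, 3, 1]))
```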

Relevance: 20.00%

Abstract:

Anisotropic damage distribution and evolution have a profound effect on borehole stress concentrations. Damage evolution is an irreversible process that is not adequately described within classical equilibrium thermodynamics. Therefore, we propose a constitutive model, based on non-equilibrium thermodynamics, that accounts for anisotropic damage distribution, anisotropic damage threshold and anisotropic damage evolution. We implemented this constitutive model numerically, using the finite element method, to calculate stress–strain curves and borehole stresses. The resulting stress–strain curves are distinctively different from linear elastic-brittle and linear elastic-ideal plastic constitutive models and realistically model experimental responses of brittle rocks. We show that the onset of damage evolution leads to an inhomogeneous redistribution of material properties and stresses along the borehole wall. The classical linear elastic-brittle approach to borehole stability analysis systematically overestimates the stress concentrations on the borehole wall, because dissipative strain-softening is underestimated. The proposed damage mechanics approach explicitly models dissipative behaviour and leads to non-conservative mud window estimations. Furthermore, anisotropic rocks with preferential planes of failure, like shales, can be addressed with our model.
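
The abstract's full model is anisotropic and thermodynamically derived; as a simplified illustration of how a damage variable produces the softening described, the scalar (isotropic) special case degrades the elastic stiffness as damage grows:

```latex
% Scalar illustration only; the paper's model is anisotropic.
\[
  \sigma = (1 - D)\,E\,\varepsilon, \qquad 0 \le D < 1, \qquad \dot{D} \ge 0 .
\]
% Once the damage threshold is crossed and D evolves, the tangent stiffness
% (1 - D)E falls, so the stress-strain curve softens instead of remaining
% linear up to brittle failure.
```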

Relevance: 20.00%

Abstract:

Vacuum circuit breaker (VCB) overvoltages and the associated catastrophic failures during shunt reactor switching have been analyzed through computer simulations of multiple reignitions using statistical VCB models found in the literature. However, no systematic review (SR) of multiple reignitions with a statistical VCB model yet exists. This paper therefore analyzes and explores multiple reignitions with a statistical VCB model, examining the salient points, research gaps and limitations of the multiple reignition phenomenon to assist future investigations following the SR search. Based on the SR results, seven issues and two approaches to enhance the current statistical VCB model are identified. These results will be useful as input for improving computer modeling accuracy and for developing a reignition switch model with point-on-wave controlled switching for condition monitoring.

Relevance: 20.00%

Abstract:

Matched case–control research designs can be useful because matching can increase power due to reduced variability between subjects. However, inappropriate statistical analysis of matched data could result in a change in the strength of association between the dependent and independent variables or a change in the significance of the findings. We sought to ascertain whether matched case–control studies published in the nursing literature utilized appropriate statistical analyses. Of 41 articles identified that met the inclusion criteria, 31 (76%) used an inappropriate statistical test for comparing data derived from case subjects and their matched controls. In response to this finding, we developed an algorithm to support decision-making regarding statistical tests for matched case–control studies.
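
To make the central point concrete: for pair-matched binary data, the appropriate comparison is McNemar's test on the discordant pairs rather than an independent-samples chi-square test. A minimal sketch using statsmodels, with a hypothetical 2x2 table of matched pairs:

```python
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of matched pairs: rows = case exposed / not exposed,
# columns = control exposed / not exposed. Counts are hypothetical.
table = [[15, 25],
         [ 8, 32]]

# Exact binomial test on the discordant pairs (here 25 vs 8).
result = mcnemar(table, exact=True)
print(f"McNemar statistic={result.statistic}, p-value={result.pvalue:.4f}")
```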

Relevance: 20.00%

Abstract:

When a community already torn by an event such as a prolonged war is then hit by a natural disaster, the negative longer-term impact of this subsequent disaster can be extremely devastating. Natural disasters further damage already destabilised and demoralised communities, making it much harder for them to be resilient and recover. Communities often face enormous challenges during the immediate recovery and the subsequent long-term reconstruction periods, mainly owing to the lack of a viable community involvement process. In post-war settings, affected communities, including those internally displaced, are often perceived as completely incapacitated and are hardly ever consulted when reconstruction projects are instigated. This lack of community involvement often leads to poor project planning, decreased community support, and unsustainable completed projects. The impact of war, coupled with the tensions created by uninhabitable and poor housing provision, often hinders affected residents from integrating permanently into their home communities. This paper outlines a number of fundamental factors that act as barriers to community participation following natural disasters in post-war settings. It is based on a statistical analysis of, and findings from, a questionnaire survey administered in early 2012 in Afghanistan.

Relevance: 20.00%

Abstract:

A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions. This observation raises the question, "what assumptions are required to achieve one-time programs for quantum circuits?" Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.
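
To fix ideas about the hardware primitive the construction assumes, here is a toy software simulation of a one-time memory's interface: it stores two secrets and reveals exactly one. As the abstract notes, real software cannot enforce this (program state can always be copied), which is precisely why hardware is assumed; the class below is illustrative only.

```python
class OneTimeMemory:
    """Toy simulation of a one-time memory (OTM) interface."""

    def __init__(self, secret0: bytes, secret1: bytes):
        self._secrets = (secret0, secret1)
        self._used = False

    def read(self, choice: int) -> bytes:
        if self._used:
            raise RuntimeError("OTM already consumed")
        self._used = True
        secret = self._secrets[choice]
        self._secrets = None  # "self-destruct" (only simulated here)
        return secret

otm = OneTimeMemory(b"key-if-0", b"key-if-1")
print(otm.read(1))  # returns b"key-if-1"; any further read raises
```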

Relevance: 20.00%

Abstract:

Background: Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult owing to species biology and behavioural characteristics. The design of robust sampling programmes should be based on an underlying statistical distribution that is sufficiently flexible to capture variations in the spatial distribution of the target species. Results: Comparisons are made of the accuracy of four probability-of-detection sampling models - the negative binomial model [1], the Poisson model [1], the double logarithmic model [2] and the compound model [3] - for detection of insects over a broad range of insect densities. Although the double log and negative binomial models performed well under specific conditions, it is shown that, of the four models examined, the compound model performed best over a broad range of insect spatial distributions and densities. In particular, this model predicted well the number of samples required when insect density was high and clumped within experimental storages. Conclusions: This paper reinforces the need for effective sampling programmes designed to detect insects over a broad range of spatial distributions. The compound model is robust over a broad range of insect densities and leads to substantial improvement in detection probabilities within highly variable systems such as grain storage.
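
As a concrete illustration of how such models translate into sample-size requirements, the sketch below compares the Poisson and negative binomial cases: the probability that a single sample contains no insects is $e^{-m}$ for a Poisson with mean density $m$, and $(1 + m/k)^{-k}$ for a negative binomial with clumping parameter $k$; $n$ samples then detect with probability $1 - p_0^n$. The density and clumping values are hypothetical.

```python
import math

def samples_needed(p0_single: float, target: float = 0.95) -> int:
    """Smallest n such that detection probability 1 - p0**n >= target."""
    return math.ceil(math.log(1 - target) / math.log(p0_single))

m, k = 0.5, 0.3                     # hypothetical density and clumping
p0_pois = math.exp(-m)              # P(empty sample), Poisson
p0_nb = (1 + m / k) ** (-k)         # P(empty sample), negative binomial

print("Poisson:   ", samples_needed(p0_pois), "samples")
print("Neg. binom:", samples_needed(p0_nb), "samples")  # clumping => more
```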

Relevance: 20.00%

Abstract:

Operational modal analysis (OMA) is prevalent in modal identification of civil structures. It requires response measurements of the underlying structure under ambient loads, and a valid OMA method requires the excitation to be white noise in time and space. Although there are numerous applications of OMA in the literature, few have investigated the statistical distribution of a measurement and the influence of such randomness on modal identification. This research employed a modified kurtosis index to evaluate the statistical distribution of raw measurement data, and a windowing strategy based on this index is proposed to select quality datasets. To demonstrate how the data selection strategy works, ambient vibration measurements of a laboratory bridge model and of a real cable-stayed bridge were considered. The analysis used frequency domain decomposition (FDD) as the target OMA approach for modal identification, and the modal identification results using data segments with different randomness were compared. The discrepancy in the FDD spectra indicates that, to fulfil the assumptions of an OMA method, special care must be taken in processing long vibration measurement records. The proposed data selection strategy is easy to apply and verified to be effective in modal analysis.
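
A sketch of the windowing idea, using plain excess kurtosis as a stand-in for the paper's modified kurtosis index: each window of a long record is scored, and the windows closest to Gaussian (excess kurtosis near zero) are kept, consistent with the white-noise excitation assumption. Window length and counts are arbitrary.

```python
import numpy as np
from scipy.stats import kurtosis

def select_windows(signal, win_len, n_keep):
    """Keep the n_keep windows whose distribution is most Gaussian-like."""
    n_win = len(signal) // win_len
    windows = signal[: n_win * win_len].reshape(n_win, win_len)
    scores = np.abs(kurtosis(windows, axis=1))  # |excess kurtosis| per window
    keep = np.argsort(scores)[:n_keep]          # smallest scores first
    return windows[keep]

rng = np.random.default_rng(0)
record = rng.standard_normal(20_000)            # stand-in for a measurement
good = select_windows(record, win_len=2_000, n_keep=5)
print(good.shape)  # (5, 2000)
```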

Relevance: 20.00%

Abstract:

This thesis explored the development of statistical methods to support the monitoring and improvement of the quality of treatment delivered to patients undergoing coronary angioplasty procedures. To achieve this goal, a suite of outcome measures was identified to characterise performance of the service, statistical tools were developed to monitor the various indicators, and measures to strengthen governance processes were implemented and validated. Although this work pursued these aims in the context of an angioplasty service located at a single clinical site, the tools and techniques were developed mindful of their potential application to other clinical specialties and a wider, potentially national, scope.
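
One representative tool from this family of monitoring methods is a Bernoulli CUSUM chart tracking a complication rate against an acceptable baseline; the sketch below is a generic illustration, and the baseline rate, detectable shift and decision threshold are hypothetical, not the thesis's values.

```python
import math

def bernoulli_cusum(outcomes, p0=0.05, p1=0.10, h=4.5):
    """outcomes: 1 = complication, 0 = success. Signals when S >= h."""
    w1 = math.log(p1 / p0)              # log-likelihood weight for a failure
    w0 = math.log((1 - p1) / (1 - p0))  # weight for a success (negative)
    s, signals = 0.0, []
    for i, y in enumerate(outcomes):
        s = max(0.0, s + (w1 if y else w0))
        if s >= h:
            signals.append(i)           # chart signals at case index i
            s = 0.0                     # reset after each signal
    return signals

# 30 clean cases, then a run with an elevated complication rate.
print(bernoulli_cusum([0] * 30 + [1, 1, 0, 1, 1, 1, 0, 1] * 5))
```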

Relevance: 20.00%

Abstract:

Nitrous oxide emissions from soil are known to be spatially and temporally volatile. Reliable estimation of emissions over a given time and space depends on measuring with sufficient intensity, but deciding on the number of measuring stations and the frequency of observation can be vexing. The question also arises of whether low-frequency manual observations provide results comparable to high-frequency automated sampling. Data collected from a replicated field experiment were studied intensively with the intention of giving statistically robust guidance on these issues. In the experiment, soil-to-air nitrous oxide flux was monitored within 10 m by 2.5 m plots over sixty days, by automated closed chambers at an average sampling interval of 3 h and by manual static chambers at an average sampling interval of three days. Trends in flux over time observed by the static chambers were mostly within the auto-chamber bounds of experimental error, and cumulative nitrous oxide emissions as measured by each system were also within error bounds. Under the temporal response pattern in this experiment, no significant loss of information was observed after culling the data to simulate various low-frequency scenarios. Within the confines of this experiment, observations from the manual chambers were not spatially correlated above distances of 1 m; statistical power was therefore found to improve with increased replicates per treatment or chambers per replicate. Careful after-action review of experimental data can deliver savings for future work.
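
The comparison described rests on two simple operations: integrating a flux time series to a cumulative emission, and culling it to mimic low-frequency manual sampling. A sketch with synthetic data (the flux values and units are placeholders, not the experiment's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
t_hours = np.arange(0, 60 * 24, 3)              # 60 days, 3 h auto sampling
flux = np.abs(rng.normal(10, 4, t_hours.size))  # synthetic flux, ug/m2/h

cum_auto = np.trapezoid(flux, t_hours)          # cumulative, full record
culled = slice(None, None, 24)                  # every 72 h ~ 3-day manual
cum_manual = np.trapezoid(flux[culled], t_hours[culled])

print(f"auto: {cum_auto:.0f}  culled: {cum_manual:.0f}  "
      f"rel. diff: {abs(cum_auto - cum_manual) / cum_auto:.1%}")
```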

Relevance: 20.00%

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict positive future markets for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres such as the Cloud. One of these challenges is managing Cloud resources for SaaS in a way that guarantees SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses that gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are highly constrained, large-scale and complex combinatorial optimisation problems; evolutionary algorithms are therefore adopted as the main solution technique.

The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented, and a classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms.

In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs), but the Cloud environment may require the current placement to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS: the first GGA uses a repair-based method and the second a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems.

The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task, and the constraints and interdependencies between components make solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem, exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm, achieving a low-cost scaling and placement plan.

This research has identified three significant new problems for composite SaaS in the Cloud, and the various evolutionary algorithms developed to address them contribute to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
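
As a flavour of the evolutionary approach common to all three problems, the sketch below is a bare-bones classical genetic algorithm assigning components to servers under capacity constraints; the encoding, fitness function and parameters are simplified stand-ins, and the thesis's formulations additionally model component dependencies and response times.

```python
import random

N_COMPONENTS, N_SERVERS = 8, 3
demand = [random.randint(1, 4) for _ in range(N_COMPONENTS)]
capacity = [10] * N_SERVERS

def fitness(plan):
    """Lower is better: servers used plus a penalty for overloaded servers."""
    load = [0] * N_SERVERS
    for comp, server in enumerate(plan):
        load[server] += demand[comp]
    penalty = sum(max(0, l - c) for l, c in zip(load, capacity))
    return len(set(plan)) + 10 * penalty

def evolve(pop_size=40, generations=200):
    pop = [[random.randrange(N_SERVERS) for _ in range(N_COMPONENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_COMPONENTS)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                    # mutation
                child[random.randrange(N_COMPONENTS)] = (
                    random.randrange(N_SERVERS))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

print(evolve())  # e.g. [0, 0, 2, 0, 2, 2, 0, 0]
```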

Relevance: 20.00%

Abstract:

This thesis explored the knowledge and reasoning of young children in solving novel statistical problems, and the influence of problem context and design on their solutions. It found that young children's statistical competencies are underestimated, and that problem design and context facilitated children's application of a wide range of knowledge and reasoning skills, none of which had been taught. A qualitative design-based research method, informed by the Models and Modeling perspective (Lesh & Doerr, 2003), underpinned the study. Data modelling activities incorporating picture story books were used to contextualise the problems. Children applied real-world understanding to problem solving, including attribute identification, categorisation and classification skills. They drew on intuitive and metarepresentational knowledge, together with inductive and probabilistic reasoning, to make sense of data, and a beginning awareness of statistical variation and informal inference was visible.

Relevance: 20.00%

Abstract:

Indirect inference (II) is a methodology for estimating the parameters of an intractable (generative) model on the basis of an alternative parametric (auxiliary) model that is both analytically and computationally easier to deal with. Such an approach has been well explored in the classical literature but has received substantially less attention in the Bayesian paradigm. The purpose of this paper is to compare and contrast a collection of what we call parametric Bayesian indirect inference (pBII) methods. One class of pBII methods uses approximate Bayesian computation (referred to here as ABC II), where the summary statistic is formed on the basis of the auxiliary model, using ideas from II. Another approach proposed in the literature, referred to here as parametric Bayesian indirect likelihood (pBIL), is shown to be fundamentally different from ABC II. We devise new theoretical results for pBIL to give extra insight into its behaviour and its differences from ABC II. Furthermore, we examine in more detail the assumptions required to use each pBII method. The results, insights and comparisons developed in this paper are illustrated on simple examples and two substantive applications: the first involves performing inference for complex quantile distributions based on simulated data, while the second estimates the parameters of a trivariate stochastic process describing the evolution of macroparasites within a host based on real data. We create a novel framework called Bayesian indirect likelihood (BIL) which encompasses pBII as well as general ABC methods, so that the connections between the methods can be established.
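
A minimal sketch of the ABC II idea, in which the summary statistic supplied to ABC rejection is the auxiliary model's parameter estimate; the gamma simulator, Gaussian auxiliary model, prior and tolerance below are all illustrative assumptions, not the paper's applications.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulator(theta, n=200):
    """Stand-in generative model treated as having an intractable likelihood."""
    return rng.gamma(shape=theta, scale=1.0, size=n)

def auxiliary_mle(data):
    """Gaussian auxiliary model: its MLE (mean, std) summarises a dataset."""
    return np.array([data.mean(), data.std()])

observed = simulator(theta=3.0)
s_obs = auxiliary_mle(observed)

# ABC rejection using the auxiliary-model summary statistic.
draws, eps, accepted = 5000, 0.3, []
for _ in range(draws):
    theta = rng.uniform(0.5, 6.0)               # prior draw
    s_sim = auxiliary_mle(simulator(theta))
    if np.linalg.norm(s_sim - s_obs) < eps:
        accepted.append(theta)

print(f"{len(accepted)} accepted; posterior mean ~ {np.mean(accepted):.2f}")
```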

Relevance: 20.00%

Abstract:

This chapter argues for the need to restructure children’s statistical experiences from the beginning years of formal schooling. The ability to understand and apply statistical reasoning is paramount across all walks of life, as seen in the variety of graphs, tables, diagrams, and other data representations requiring interpretation. Young children are immersed in our data-driven society, with early access to computer technology and daily exposure to the mass media. With the rate of data proliferation have come increased calls for advancing children’s statistical reasoning abilities, commencing with the earliest years of schooling (e.g., Langrall et al. 2008; Lehrer and Schauble 2005; Shaughnessy 2010; Whitin and Whitin 2011). Several articles (e.g., Franklin and Garfield 2006; Langrall et al. 2008) and policy documents (e.g., National Council of Teachers of Mathematics 2006) have highlighted the need for a renewed focus on this component of early mathematics learning, with children working mathematically and scientifically in dealing with real-world data. One approach to this component in the beginning school years is through data modelling (English 2010; Lehrer and Romberg 1996; Lehrer and Schauble 2000, 2007)...