972 results for Laplace-Metropolis estimator
Abstract:
This paper considers the applicability of the least mean fourth (LMF) power gradient adaptation criterion, with advantage, to signals corrupted by Gaussian noise whose power is not known. The proposed method, used as an adaptive spectral estimator, is found to provide better performance than least mean square (LMS) adaptation at the same (or even lower) speed of convergence for signals with a sufficiently high signal-to-Gaussian-noise ratio. The results include a comparison of the performance of the LMS tapped delay line, LMF tapped delay line, LMS lattice and LMF lattice algorithms, with Burg's block data method as reference. Signals such as noisy sinusoids and stochastic signals such as the EEG are considered in this study.
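To make the comparison concrete, here is a minimal sketch of the two stochastic-gradient updates the abstract contrasts: LMS follows the error e, while LMF follows e cubed. The tapped-delay-line setup, filter length, step sizes and test signal below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def adapt(x, d, n_taps=8, mu=0.01, order=2):
    """Tapped-delay-line adaptive filter; order=2 is LMS, order=4 is LMF."""
    w = np.zeros(n_taps)
    err = np.empty(len(x) - n_taps)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]            # delay-line input vector
        e = d[n] - w @ u                     # prediction error
        grad = e if order == 2 else e**3     # LMS uses e, LMF uses e^3
        w += mu * grad * u                   # stochastic gradient step
        err[n - n_taps] = e
    return w, err

# One-step prediction of a sinusoid in Gaussian noise (illustrative signal).
rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(t.size)
w_lms, e_lms = adapt(x, x, order=2)
w_lmf, e_lmf = adapt(x, x, mu=0.005, order=4)
print(np.mean(e_lms[-500:]**2), np.mean(e_lmf[-500:]**2))
```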
Abstract:
Background: Plotless density estimators are those that are based on distance measures rather than counts per unit area (quadrats or plots) to estimate the density of some usually stationary event, e.g. burrow openings or damage to plant stems. These estimators typically use distances between events and from random points to events to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have previously been examined using simulated populations only. In this study we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. The sites covered a wide range of situations, including animal damage to rice and corn, nest locations, active rodent burrows and distributions of plants. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of each estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies. Results: An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or with the formula used in the Kendall-Moran estimator, in which case a reduction in error may be gained for sample sizes below 25; there is, however, no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations are easy to undertake. Conclusion: Plotless density estimators can provide an estimate of density in situations where it would not be practical to lay out a plot or quadrat, and they can in many cases reduce the workload in the field.
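As an illustration of the "basic distance" idea behind these estimators, the sketch below implements one standard point-to-event form (an ordered-distance estimator of Pollard's type) and checks it on a simulated random pattern; the compound estimator the paper recommends combines three such components, which are not reproduced here.

```python
import numpy as np

def ordered_distance_density(r, g=1):
    """Ordered-distance estimator: density from distances r between n random
    points and their g-th nearest event (unbiased under complete spatial
    randomness)."""
    r = np.asarray(r, dtype=float)
    n = r.size
    return (g * n - 1) / (np.pi * np.sum(r**2))

# Illustrative check on a simulated random (Poisson-like) pattern of known density.
rng = np.random.default_rng(1)
true_density = 50.0                        # events per unit area
events = rng.uniform(0, 10, size=(int(true_density * 100), 2))
points = rng.uniform(1, 9, size=(25, 2))   # 25 sample points, away from edges
d = np.sqrt(((points[:, None, :] - events[None, :, :])**2).sum(-1)).min(axis=1)
print(ordered_distance_density(d))         # should be near 50
```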
Abstract:
Purpose – Preliminary cost estimates for construction projects are often the basis of financial feasibility and budgeting decisions in the early stages of planning, and of effective project control, monitoring and execution. The purpose of this paper is to identify and better understand the cost drivers and factors that contribute to the accuracy of estimates in residential construction projects from the developers' perspective. Design/methodology/approach – The paper uses a literature review to determine the drivers that affect the accuracy of developers' early stage cost estimates and the factors influencing the construction costs of residential projects. It draws on cost variance data and other supporting documentation collected from two case study projects in South East Queensland, Australia, along with semi-structured interviews conducted with the practitioners involved. Findings – Many cost drivers and factors of cost uncertainty identified in the literature for large-scale projects are found to be less apparent and relevant for developers' small-scale residential construction projects. Specifically, the certainty and completeness of project-specific information, the suitability of historical cost data, contingency allowances, the method of estimating and the estimator's level of experience significantly affect the accuracy of cost estimates. Developers of small-scale residential projects use pre-established and suitably priced bills of quantities as their prime estimating method, which is considered the most efficient and accurate method for standard house designs. However, this method needs to be backed by the expertise and experience of the estimator. Originality/value – There is a lack of research on the accuracy of developers' early stage cost estimates and on the relevance and applicability of cost drivers and factors in residential construction projects. This research has practical significance for improving the accuracy of such preliminary cost estimates.
Abstract:
The Metropolis algorithm has been generalized to allow for variation of the shape and size of the Monte Carlo (MC) cell. A calculation using different potentials illustrates how the generalized method can be used to study crystal structure transformations. A restricted MC integration in the nine-dimensional space of the cell components also leads to the stable structure for the Lennard-Jones potential.
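A minimal sketch of the kind of acceptance rule such a generalization leads to: a trial change of the cell matrix h (both shape and size) at fixed scaled particle coordinates is accepted with a constant-pressure Metropolis criterion. This is a generic constant-pressure form, not the paper's exact scheme; the potential energies U are assumed to be supplied by the caller.

```python
import numpy as np

def accept_cell_move(U_old, U_new, h_old, h_new, n_atoms, pressure, beta, rng):
    """Metropolis acceptance for a trial change of the MC cell matrix h,
    with particles held at fixed scaled coordinates (generic NPT form)."""
    V_old = abs(np.linalg.det(h_old))
    V_new = abs(np.linalg.det(h_new))
    d_enthalpy = (U_new - U_old) + pressure * (V_new - V_old)
    # Jacobian of the scaled-coordinate change contributes N ln(V'/V).
    log_acc = -beta * d_enthalpy + n_atoms * np.log(V_new / V_old)
    return np.log(rng.random()) < log_acc

# Illustrative call: a small random strain applied to a cubic cell.
rng = np.random.default_rng(1)
h = 10.0 * np.eye(3)
h_trial = h @ (np.eye(3) + 0.01 * rng.standard_normal((3, 3)))
print(accept_cell_move(0.0, 0.05, h, h_trial, n_atoms=64,
                       pressure=1.0, beta=1.0, rng=rng))
```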
Abstract:
This dissertation examines the short- and long-run impacts of timber prices and other factors affecting NIPF owners' timber harvesting and timber stocking decisions. The utility-based Faustmann model provides testable hypotheses about the exogenous variables retained in the timber supply analysis. The timber stock function, derived from a two-period biomass harvesting model, is estimated using a two-step GMM estimator on balanced panel data from 1983 to 1991. Timber supply functions are estimated using a Tobit model adjusted for heteroscedasticity and non-normality of errors, based on panel data from 1994 to 1998. The results show that if specification analysis of the Tobit model is ignored, inconsistency and bias can have a marked effect on the parameter estimates. Empirically, the owner's age is the single most important factor determining timber stock, and the timber price is the single most important factor in the harvesting decision. The results of the timber supply estimations can be interpreted through the utility-based Faustmann model of a forest owner who values growing timber in situ.
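For orientation, the sketch below fits a textbook Tobit model (censored at zero, homoscedastic normal errors) by maximum likelihood on simulated data; the dissertation's estimator additionally corrects for heteroscedasticity and non-normality, which this baseline omits.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y):
    """Negative log-likelihood of a Tobit model censored at zero
    (e.g. harvest = 0 for non-harvesters), homoscedastic normal errors."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    ll = np.where(
        y <= 0,
        norm.logcdf(-xb / sigma),                    # P(latent y* <= 0)
        norm.logpdf((y - xb) / sigma) - log_sigma,   # density of observed y
    )
    return -ll.sum()

# Illustrative fit on simulated data (intercept + 2 covariates).
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.standard_normal((500, 2))])
y_star = X @ np.array([0.5, 1.0, -0.8]) + rng.standard_normal(500)
y = np.maximum(y_star, 0.0)                          # censoring at zero
fit = minimize(tobit_negloglik, x0=np.zeros(4), args=(X, y), method="BFGS")
print(fit.x[:-1])                                    # beta estimates
```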
Abstract:
A residual-based strategy for estimating the local truncation error in a finite volume framework for steady compressible flows is proposed. The estimator, referred to here as the residual parameter, is derived from the imbalance arising when an exact operator is applied to the numerical solution of a conservation law. The behaviour of the residual estimator for linear and non-linear hyperbolic problems is systematically analysed, and its relationship to the global error is also studied. The residual parameter is used to derive a target length scale and, consequently, a suitable criterion for refinement and derefinement. This strategy, devoid of any user-defined parameters, is validated using two standard test cases involving smooth flows. A hybrid adaptive strategy, based on both error indicators and the residual parameter, is also developed for flows involving shocks. Numerical studies on several compressible flow cases show that the adaptive algorithm performs very well in both two and three dimensions.
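A schematic of the underlying idea, not the paper's estimator: applying an exact conservation-law operator to the discrete solution leaves an imbalance whose magnitude can drive refinement. The 1-D Burgers flux, the central evaluation and the refinement fraction below are illustrative assumptions.

```python
import numpy as np

def residual_indicator(u, x, flux):
    """Evaluate an exact steady conservation-law operator d f(u)/dx on the
    discrete solution; the nonzero imbalance plays the role of a residual
    parameter. Illustrative 1-D finite-difference stand-in."""
    f = flux(u)
    r = np.empty_like(u)
    r[1:-1] = (f[2:] - f[:-2]) / (x[2:] - x[:-2])    # central evaluation
    r[0], r[-1] = r[1], r[-2]
    return np.abs(r)

def flag_cells(r, refine_frac=0.1):
    """Mark the cells with the largest residuals for refinement."""
    return r >= np.quantile(r, 1.0 - refine_frac)

# Illustrative use: Burgers flux on a smooth mock numerical profile.
x = np.linspace(0.0, 1.0, 101)
u = np.tanh(20 * (x - 0.5))
r = residual_indicator(u, x, flux=lambda u: 0.5 * u**2)
print(np.flatnonzero(flag_cells(r))[:5])
```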
Abstract:
We use Bayesian model selection techniques to test extensions of the standard flat LambdaCDM paradigm. Dark-energy and curvature scenarios, and primordial perturbation models, are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), an adaptive sampling technique recently applied in a cosmological context. The Bayesian evidence is immediately available from the PMC sample used for parameter estimation, without further computational effort, and it comes with an associated error estimate. Moreover, PMC provides an unbiased estimator of the evidence after any fixed number of iterations and is naturally parallelizable, in contrast with MCMC and nested sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability of the evidence evaluation and its stability in various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated with a precision of better than 0.08. Using a combined set of recent CMB, SNIa and BAO data, we find inconclusive evidence between flat LambdaCDM and simple dark-energy models. A curved Universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index, and a Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r=0.
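The evidence-from-the-sample property can be illustrated with a plain importance-sampling estimator: given draws from the (adapted) PMC proposal, the evidence is the mean importance weight. The sketch below, including the toy Gaussian check, is a minimal stand-in for the full PMC machinery.

```python
import numpy as np
from scipy.stats import norm

def evidence_estimate(theta, log_prior, log_like, log_proposal):
    """Importance-sampling evidence estimate from proposal draws theta:
    Z = E_q[ prior * likelihood / q ], computed stably in log space."""
    log_w = log_prior(theta) + log_like(theta) - log_proposal(theta)
    w = np.exp(log_w - log_w.max())
    z = np.exp(log_w.max()) * w.mean()          # unbiased evidence estimate
    ess = w.sum()**2 / (w**2).sum()             # effective sample size
    return z, ess

# Toy check: N(0,1) prior, one datum y=1 with N(theta,1) likelihood;
# the analytic evidence is the N(0, sqrt(2)) density at y.
rng = np.random.default_rng(2)
th = rng.normal(0.5, 2.0, size=20000)           # draws from proposal q
z, ess = evidence_estimate(
    th,
    log_prior=lambda t: norm.logpdf(t, 0.0, 1.0),
    log_like=lambda t: norm.logpdf(1.0, t, 1.0),
    log_proposal=lambda t: norm.logpdf(t, 0.5, 2.0),
)
print(z, norm.pdf(1.0, 0.0, np.sqrt(2.0)))      # should roughly agree
```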
Abstract:
Having the ability to work with complex models can be highly beneficial, but the computational cost of doing so is often large. Complex models often have intractable likelihoods, so methods that use the likelihood function directly are infeasible. In these situations, the benefits of likelihood-free methods become apparent. Likelihood-free methods, such as parametric Bayesian indirect likelihood, which uses the likelihood of an alternative parametric auxiliary model, have been explored throughout the literature as good alternatives when the model of interest is complex. One such method is the synthetic likelihood (SL), which assumes a multivariate normal approximation to the likelihood of a summary statistic of interest. This paper explores the accuracy and computational efficiency of the Bayesian version of the synthetic likelihood (BSL) approach in comparison with a competitor known as approximate Bayesian computation (ABC), together with its sensitivity to tuning parameters and assumptions. We relate BSL to pseudo-marginal methods and propose an alternative SL that uses an unbiased estimator of the exact working normal likelihood when the summary statistic has a multivariate normal distribution. Several applications of varying complexity are considered to illustrate the findings of the paper.
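A minimal sketch of the standard (plug-in) synthetic likelihood that BSL is built on: simulate summary statistics under a parameter value, fit a multivariate normal, and evaluate the observed summary under it. The paper's alternative replaces the plug-in normal density with an unbiased estimator; the toy simulator below is a hypothetical assumption for illustration.

```python
import numpy as np

def log_synthetic_likelihood(theta, s_obs, simulate, n_sims=200, rng=None):
    """Plug-in SL: multivariate normal fit to simulated summary statistics,
    evaluated at the observed summary s_obs."""
    rng = rng if rng is not None else np.random.default_rng()
    S = np.array([simulate(theta, rng) for _ in range(n_sims)])  # (n, d)
    mu, cov = S.mean(axis=0), np.cov(S, rowvar=False)
    dev = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    quad = dev @ np.linalg.solve(cov, dev)
    return -0.5 * (dev.size * np.log(2 * np.pi) + logdet + quad)

# Hypothetical simulator: summaries = (mean, log-variance) of 50 N(theta,1) draws.
def sim(theta, rng):
    s = rng.normal(theta, 1.0, size=50)
    return np.array([s.mean(), np.log(s.var())])

print(log_synthetic_likelihood(0.0, np.array([0.1, 0.0]), sim,
                               rng=np.random.default_rng(3)))
```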
Abstract:
An analytical solution of a two-dimensional problem of solidification of a superheated liquid in a semi-infinite mould is studied in this paper. The prescribed boundary temperature is such that solidification starts simultaneously at all points of the boundary. Results are also given for the two-dimensional ablation problem. The solution of the heat conduction equation is obtained in terms of multiple Laplace integrals involving suitable unknown fictitious initial temperatures, which have interesting physical interpretations. By choosing suitable series expansions for the fictitious initial temperatures and the moving interface boundary, the unknown quantities can be determined. The solidification thickness has been calculated for short times, and the effect of the parameters on the solidification thickness is shown graphically.
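For orientation, here is the generic two-phase Stefan formulation from which such analyses start, written in our own notation (phase i = s, l for solid and liquid); the paper's fictitious-initial-temperature construction is not reproduced.

```latex
% Generic 2-D Stefan setup: temperature T_i in phase i,
% interface x = X(y, t) held at the melting temperature T_m.
\begin{align*}
  \frac{\partial T_i}{\partial t}
    &= \kappa_i \left( \frac{\partial^2 T_i}{\partial x^2}
                     + \frac{\partial^2 T_i}{\partial y^2} \right), \\
  T_s = T_l &= T_m \quad \text{on } x = X(y,t), \\
  \rho L \, \frac{\partial X}{\partial t}
    &= k_s \, \frac{\partial T_s}{\partial n}
     - k_l \, \frac{\partial T_l}{\partial n}
     \quad \text{(Stefan condition at the moving interface).}
\end{align*}
```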
Time-dependent flows of rotating and stratified fluids in geometries with non-uniform cross-sections
Abstract:
Unsteady rotating and stratified flows in geometries with non-uniform cross-sections are investigated under the Oseen approximation using the Laplace transform technique. The solutions are obtained in closed form and reveal that the flow remains oscillatory even after infinitely large time. Inertial waves propagating in both the positive and negative directions of the flow are observed. When the Rossby or Froude number is close to one of an infinite set of critical values, blocking and backflow occur, and the flow pattern becomes more and more complicated, with the number of stagnant zones increasing as each critical value is crossed. The analogy observed between the solutions for rotating and stratified flows is also discussed.
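Closed-form Laplace-domain solutions of this kind can be checked by numerical inversion. The sketch below inverts an illustrative transform of a decaying oscillation with mpmath's Talbot method and compares it against the known inverse; the transform is a stand-in, not the paper's solution.

```python
import mpmath as mp

# Stand-in transform F(s) = 1 / ((s + a)^2 + w^2);
# its exact inverse is exp(-a t) * sin(w t) / w.
a, w = mp.mpf("0.2"), mp.mpf("3.0")
F = lambda s: 1 / ((s + a)**2 + w**2)

for t in (0.5, 1.0, 2.0):
    num = mp.invertlaplace(F, t, method="talbot")   # numerical inversion
    exact = mp.e**(-a * t) * mp.sin(w * t) / w
    print(t, num, exact)
```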
Abstract:
Fan-forced injection of phosphine gas fumigant into stored grain is a common method of treating insect infestation. For low injection velocities the transport of fumigant can be modelled as Darcy flow in a porous medium, in which the gas pressure satisfies Laplace's equation. Using this approach, a closed-form series solution is derived for the pressure, velocity and streamlines in a cylindrical grain bed with either a circular or an annular inlet, from which traverse times are computed numerically. A leading-order closed-form expression for the traverse time is also obtained and found to be reasonable for inlet configurations close to the central axis of the store. Results are interpreted for a representative 6 m high farm wheat store, where the time to advect the phosphine through almost the entire grain bed is found to be approximately one hour.
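Given a velocity field from such a series solution, the traverse time is the travel time of a fluid particle along its streamline. The sketch below integrates a particle path and, for a uniform stand-in field scaled to a 6 m bed, lands near the hour timescale quoted above; the uniform field and its magnitude are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def traverse_time(velocity, x0, z_top, t_max=1e5):
    """Integrate a particle path through a given Darcy velocity field and
    report the time to reach the top of the bed (the traverse time).
    velocity(x) -> v is assumed supplied, e.g. from a series solution."""
    def reached_top(t, x):
        return x[1] - z_top
    reached_top.terminal = True
    sol = solve_ivp(lambda t, x: velocity(x), (0.0, t_max), x0,
                    events=reached_top, max_step=t_max / 1000)
    return sol.t_events[0][0] if sol.t_events[0].size else np.inf

# Illustrative uniform upward flow through a 6 m bed at 1.7e-3 m/s
# (numbers chosen only to land near the ~1 hour scale quoted above).
v_uniform = lambda x: np.array([0.0, 1.7e-3])
print(traverse_time(v_uniform, x0=np.array([0.0, 0.0]), z_top=6.0) / 3600)
```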
Abstract:
Cyclostationary analysis has proven effective in identifying signal components for diagnostic purposes. A key descriptor in this framework is the cyclic power spectrum, traditionally estimated by the averaged cyclic periodogram and the smoothed cyclic periodogram. A lengthy debate about the best estimator finally found a resolution in a cornerstone work by Antoni, who proposed a unified form for the two families, allowing a detailed statistical study of their properties. Since then, the focus of cyclostationary research has shifted towards algorithms, in terms of computational efficiency and simplicity of implementation. Traditional algorithms have proven computationally inefficient, and the sophisticated "cyclostationary" formulation of these estimators has slowed their spread in industry. The only attempt to increase the computational efficiency of cyclostationary estimators is represented by the cyclic modulation spectrum. This indicator exploits the relationship between cyclostationarity and envelope analysis; the link with envelope analysis allows a leap in computational efficiency and provides a way in for industrial engineers. However, this estimator lies outside the unified form described above, and an unbiased version of the indicator has not been proposed. This paper will therefore extend the analysis of envelope-based estimators of the cyclic spectrum, proposing a new approach that includes them in the unified form of cyclostationary estimators. This will enable the definition of a new envelope-based algorithm and a detailed analysis of the properties of the cyclic modulation spectrum. The computational efficiency of envelope-based algorithms will also be discussed quantitatively, for the first time, in comparison with the averaged cyclic periodogram. Finally, the algorithms will be validated with numerical and experimental examples.
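A minimal sketch of the cyclic modulation spectrum idea as usually described (a DFT over time of the per-bin squared STFT envelope); the window, overlap, mean removal and normalization below are illustrative choices, not the unified-form estimator the paper develops.

```python
import numpy as np
from scipy.signal import stft

def cyclic_modulation_spectrum(x, fs, nperseg=256, noverlap=192):
    """Envelope-based estimate of the cyclic spectrum: DFT over time of the
    squared STFT magnitude in each spectral-frequency bin.
    Returns spectral frequencies f, cyclic frequencies alpha, and the CMS."""
    f, t, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    env = np.abs(Z)**2                               # per-bin squared envelope
    cms = np.fft.rfft(env - env.mean(axis=1, keepdims=True), axis=1)
    alpha = np.fft.rfftfreq(env.shape[1], d=t[1] - t[0])
    return f, alpha, cms / env.shape[1]

# Illustrative use: amplitude-modulated noise with a 30 Hz cyclic frequency.
rng = np.random.default_rng(4)
fs, n = 8192, 8192 * 4
carrier = rng.standard_normal(n)
x = (1 + 0.8 * np.cos(2 * np.pi * 30 * np.arange(n) / fs)) * carrier
f, alpha, cms = cyclic_modulation_spectrum(x, fs)
print(alpha[np.argmax(np.abs(cms).sum(axis=0))])     # expect ~30 Hz
```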
Abstract:
Particle filters find important applications in state and parameter estimation for dynamical systems of engineering interest. Since a typical filtering algorithm involves Monte Carlo simulation of the process equations, the sample variance of the estimator is inversely proportional to the number of particles. The sample variance may be reduced if one uses Rao-Blackwell marginalization of the states and performs analytical computations as far as possible. In this work, we propose a semi-analytical particle filter, requiring no Rao-Blackwell marginalization, for state and parameter estimation of nonlinear dynamical systems with additive Gaussian process and observation noises. Through local linearization of the nonlinear drift fields in the process and observation equations via explicit Ito-Taylor expansions, the given nonlinear system is transformed into an ensemble of locally linearized systems. Using the most recent observation, conditionally Gaussian posterior density functions of the linearized systems are obtained analytically through the Kalman filter. This information is then exploited within the particle filter algorithm to obtain samples from the optimal posterior density of the states. The potential of the method in state and parameter estimation is demonstrated through numerical illustrations for a few nonlinear oscillators. The proposed filter is found to yield estimates with reduced sample variance and improved accuracy compared with results from a form of sequential importance sampling filter.
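For contrast with the proposed filter, a baseline bootstrap particle filter with a blind proposal is sketched below; the paper's semi-analytical filter instead draws from Kalman-conditioned proposals obtained by local linearization. The scalar model, noise levels and particle count are illustrative assumptions.

```python
import numpy as np

def bootstrap_pf(y, f, h, q_std, r_std, n_part=500, rng=None):
    """Bootstrap particle filter for x_{k+1} = f(x_k) + q, y_k = h(x_k) + r
    with additive Gaussian noises; blind proposal, multinomial resampling."""
    rng = rng if rng is not None else np.random.default_rng()
    x = rng.standard_normal(n_part)                      # initial particle cloud
    means = np.empty(len(y))
    for k, yk in enumerate(y):
        x = f(x) + q_std * rng.standard_normal(n_part)   # propagate particles
        log_w = -0.5 * ((yk - h(x)) / r_std)**2          # Gaussian obs weight
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means[k] = w @ x                                 # filtered mean
        x = rng.choice(x, size=n_part, p=w)              # resample
    return means

# Illustrative scalar model with a cubic drift (Euler map) and noisy data.
rng = np.random.default_rng(5)
f = lambda x: x + 0.1 * (x - x**3)
truth = [0.5]
for _ in range(99):
    truth.append(f(truth[-1]) + 0.05 * rng.standard_normal())
y = np.array(truth) + 0.2 * rng.standard_normal(100)
print(bootstrap_pf(y, f, h=lambda x: x, q_std=0.05, r_std=0.2, rng=rng)[:5])
```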
Abstract:
We evaluated techniques for estimating animal density from direct counts on line transects during 1988-92 in the tropical deciduous forests of Mudumalai Sanctuary in southern India for four species of large herbivorous mammals: chital (Axis axis), sambar (Cervus unicolor), Asian elephant (Elephas maximus) and gaur (Bos gaurus). Density estimates derived from the Fourier series and half-normal models consistently had the lowest coefficients of variation, and these two models also generated similar mean density estimates. For the Fourier series estimator, appropriate cut-off widths for analysing line transect data for the four species are suggested. Grouping the data into various distance classes did not produce any appreciable differences in the estimates of mean density or their variances, although model fit was generally better when the data were placed in fewer groups. The sampling effort needed to achieve a desired precision (coefficient of variation) in the density estimate is derived. A sampling effort of 800 km of transects returned a 10% coefficient of variation on the estimate for chital; for the other species a higher effort was needed to achieve this level of precision. There was no statistically significant relationship between the detectability of a group and its size for any species. Density estimates along roads were generally significantly different from those in the interior of the forest, indicating that road-side counts may not be appropriate for most species.
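A minimal sketch of the half-normal line-transect estimator referred to above: estimate the detection scale sigma from perpendicular distances, convert it to an effective strip half-width, and divide the count by the effectively surveyed area. The distances and effort below are simulated, not the Mudumalai data.

```python
import numpy as np

def halfnormal_transect_density(dists, L):
    """Line-transect density under a half-normal detection function
    g(x) = exp(-x^2 / (2 sigma^2)): sigma is estimated by maximum likelihood
    from perpendicular distances, the effective strip half-width is
    mu = sigma * sqrt(pi/2), and D = n / (2 L mu)."""
    dists = np.asarray(dists, dtype=float)
    n = dists.size
    sigma = np.sqrt(np.mean(dists**2))      # half-normal MLE for sigma
    mu = sigma * np.sqrt(np.pi / 2)         # effective strip half-width
    return n / (2 * L * mu)

# Illustrative use: 120 detections (distances in km), 100 km of transects.
rng = np.random.default_rng(6)
d = np.abs(rng.normal(0.0, 0.05, size=120))  # half-normal detection distances
print(halfnormal_transect_density(d, L=100.0))   # groups per km^2
```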