980 results for Monte Carlo


Relevance: 60.00%

Abstract:

This paper demonstrates the procedures for probabilistic assessment of a pesticide fate and transport model, PCPF-1, to elucidate the modeling uncertainty using the Monte Carlo technique. Sensitivity analyses are performed to investigate the influence of herbicide characteristics and related soil properties on model outputs using four popular rice herbicides: mefenacet, pretilachlor, bensulfuron-methyl and imazosulfuron. Uncertainty quantification showed that the simulated concentrations in paddy water varied more than those in paddy soil. This tendency weakened as the simulation proceeded to later periods but remained important for herbicides with either high solubility or a high first-order dissolution rate. The sensitivity analysis indicated that the PCPF-1 parameters requiring careful determination are, first, those involved in herbicide adsorption (the organic carbon content, the bulk density and the volumetric saturated water content); second, parameters related to herbicide mass distribution between paddy water and soil (first-order desorption and dissolution rates); and lastly, those governing herbicide degradation. © Pesticide Science Society of Japan.
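A minimal sketch of the kind of Monte Carlo uncertainty and sensitivity analysis described above, with a toy dissipation model standing in for PCPF-1; the parameter ranges, the surrogate model and the rank-correlation sensitivity measure are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 10_000

# Illustrative uncertain inputs (uniform ranges are assumptions, not PCPF-1 values)
koc   = rng.uniform(50, 500, n)      # sorption coefficient
f_oc  = rng.uniform(0.01, 0.05, n)   # organic carbon fraction
k_dis = rng.uniform(0.05, 0.5, n)    # first-order dissolution rate (1/day)
k_deg = rng.uniform(0.01, 0.1, n)    # first-order degradation rate (1/day)

def toy_paddy_water_conc(koc, f_oc, k_dis, k_deg, t=7.0, dose=1.0):
    """Toy surrogate: dissolved fraction partitioned against sorption, decayed over t days."""
    kd = koc * f_oc                                    # soil-water partition coefficient
    dissolved = dose * (1 - np.exp(-k_dis * t)) / (1 + kd)
    return dissolved * np.exp(-k_deg * t)

c_water = toy_paddy_water_conc(koc, f_oc, k_dis, k_deg)

# Uncertainty quantification: spread of the simulated paddy-water concentrations
print(f"mean={c_water.mean():.4f}, cv={c_water.std() / c_water.mean():.2f}")

# Simple sensitivity measure: Spearman rank correlation of each input with the output
for name, x in [("Koc", koc), ("f_oc", f_oc), ("k_dis", k_dis), ("k_deg", k_deg)]:
    rho, _ = spearmanr(x, c_water)
    print(f"{name:6s} rank correlation with output: {rho:+.2f}")
```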

Relevance: 60.00%

Abstract:

Uncertainty assessments of herbicide losses from rice paddies in Japan associated with local meteorological conditions and water management practices were performed using a pesticide fate and transport model, PCPF-1, under a Monte Carlo (MC) simulation scheme. First, MC simulations were conducted for five different cities with a prescribed water management scenario and a 10-year meteorological dataset for each city. Water management was effective in reducing pesticide runoff; however, a greater potential for pesticide runoff remained in Western Japan. Second, an extended analysis evaluated the effects of local water management and meteorological conditions in the Chikugo River basin and the Sakura River basin, using uncertainty inputs processed from observed water management data. The results showed that, because of more severe rainfall events, significant pesticide runoff occurred in the Chikugo River basin even when appropriate irrigation practices were implemented. © Pesticide Science Society of Japan.

Relevance: 60.00%

Abstract:

BACKGROUND: Monitoring studies have revealed high concentrations of pesticides in the drainage canals of paddy fields, so an assessment tool that predicts these concentrations under different management scenarios is needed. A simulation model for predicting the pesticide concentration in a paddy block (PCPF-B) was evaluated and then used to assess the effect of water management practices on controlling pesticide runoff from paddy fields. RESULTS: The PCPF-B model achieved an acceptable performance. The model was then applied in a constrained probabilistic approach using the Monte Carlo technique to evaluate the best management practices for reducing runoff of pretilachlor into the canal. The probabilistic model predictions, using actual pesticide-use and hydrological data for the canal, showed that the water holding period (WHP) and the excess water storage depth (EWSD) effectively reduced the loss and concentration of pretilachlor reaching the drainage canal from paddy fields. The WHP also reduced the timespan of pesticide exposure in the drainage canal. CONCLUSIONS: It is recommended that: (1) the WHP be applied for as long as possible, but for at least 7 days, depending on the pesticide and field conditions; (2) an EWSD greater than 2 cm be maintained to store substantial rainfall and prevent paddy runoff, especially during the WHP.
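A hedged illustration of a constrained probabilistic comparison of management practices: rainfall, dissipation and runoff below follow assumed toy distributions rather than the calibrated PCPF-B model, with the 7-day WHP and 2 cm EWSD values taken from the conclusions above:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_loss(whp_days, ewsd_cm, n_runs=5000, horizon=30, k_diss=0.15):
    """Toy Monte Carlo of pesticide loss to the drainage canal under a given
    water holding period (WHP) and excess water storage depth (EWSD)."""
    losses = np.zeros(n_runs)
    for i in range(n_runs):
        mass = 1.0                               # relative pesticide mass in paddy water
        for day in range(horizon):
            rain = rng.exponential(0.8)          # assumed daily rainfall, cm
            mass *= np.exp(-k_diss)              # first-order dissipation
            spill = max(rain - ewsd_cm, 0.0)     # overflow only beyond the storage depth
            if day >= whp_days and spill > 0:    # drainage allowed only after the WHP
                frac = min(spill / 10.0, 1.0)    # assumed fraction of ponded water lost
                losses[i] += mass * frac
                mass *= (1 - frac)
    return losses

for whp, ewsd in [(0, 0.0), (7, 2.0)]:
    loss = simulate_loss(whp, ewsd)
    print(f"WHP={whp:>2d} d, EWSD={ewsd:.0f} cm -> mean loss {loss.mean():.3f}, "
          f"P(loss > 0.1) = {np.mean(loss > 0.1):.2f}")
```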

Relevance: 60.00%

Abstract:

Neutron diffraction measurements are carried out on GexSe1-x glasses, where 0.1 ≤ x ≤ 0.4, over a Q interval of 0.55-13.8 Å⁻¹. The first sharp diffraction peak (FSDP) in the structure factor, S(Q), shows a systematic increase in intensity and shifts to lower Q with increasing Ge concentration. The coherence length of the FSDP increases with x and reaches a maximum for 0.33 ≤ x ≤ 0.4. The Monte Carlo method due to Soper is used to generate S(Q) and the pair correlation function, g(r). The generated S(Q) agrees with the experimental data for all x. Analysis of the first four peaks in the total correlation function, T(r), shows that the short-range order in GeSe2 glass is due to Ge(Se1/2)4 tetrahedra, in agreement with earlier reports. Se-rich glasses contain Se chains cross-linked with Ge(Se1/2)4 tetrahedra. Ge2(Se1/2)6 molecular units are the basic structural units in the Ge-rich (x = 0.4) glass. For x = 0.2, 0.33 and 0.4 there is evidence that some of the tetrahedra are in an edge-shared configuration. The number of edge-shared tetrahedra in these glasses increases with increasing Ge content.
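A simplified sketch in the spirit of the Monte Carlo refinement mentioned above: g(r) is perturbed at random, each trial is Fourier-transformed to S(Q), and changes that reduce the misfit to an "experimental" S(Q) are kept. The grids, the assumed number density and the synthetic target curve are placeholders, not the measured GexSe1-x data, and the acceptance rule is a bare greedy criterion rather than Soper's full scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative grids and a synthetic "experimental" S(Q); placeholders, not the measured data
r = np.linspace(0.05, 10.0, 200)              # Å
q = np.linspace(0.55, 13.8, 150)              # Å^-1
rho = 0.034                                   # assumed number density (atoms/Å^3)
dr = r[1] - r[0]
sinc_qr = np.sinc(np.outer(q, r) / np.pi)     # sin(Qr)/(Qr)

def structure_factor(g):
    """S(Q) = 1 + 4*pi*rho * integral [g(r) - 1] sin(Qr)/(Qr) r^2 dr."""
    return 1.0 + 4.0 * np.pi * rho * dr * (sinc_qr @ ((g - 1.0) * r**2))

g_target = 1.0 + 1.5 * np.exp(-(r - 2.37) ** 2 / 0.02)  # toy first-neighbour peak near the Ge-Se distance
s_exp = structure_factor(g_target)

# Monte Carlo refinement: random local changes to g(r), accepted when the misfit to S(Q) drops
g = np.ones_like(r)
chi2 = np.sum((structure_factor(g) - s_exp) ** 2)
for step in range(5000):
    trial = g.copy()
    i = rng.integers(len(r))
    trial[i] = max(trial[i] + rng.normal(0, 0.05), 0.0)   # g(r) must stay non-negative
    chi2_trial = np.sum((structure_factor(trial) - s_exp) ** 2)
    if chi2_trial < chi2:
        g, chi2 = trial, chi2_trial

print(f"chi-square misfit after refinement: {chi2:.4f}")
```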

Relevance: 60.00%

Abstract:

We consider a two timescale model of learning by economic agents wherein active or 'ontogenetic' learning by individuals takes place on a fast scale and passive or 'phylogenetic' learning by society as a whole on a slow scale, each affecting the evolution of the other. The former is modelled by the Monte Carlo dynamics of physics, while the latter is modelled by the replicator dynamics of evolutionary biology. Various qualitative aspects of the dynamics are studied in some simple cases, both analytically and numerically, and its role as a useful modelling device is emphasized.
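A minimal two-timescale sketch under assumed payoffs: fast "ontogenetic" learning is a Metropolis-style strategy switch by individual agents, while slow "phylogenetic" learning updates society-level strategy frequencies by discrete replicator dynamics. The 2x2 game, the learning temperature and the step sizes are illustrative assumptions, not the paper's model specification:

```python
import numpy as np

rng = np.random.default_rng(3)

payoff = np.array([[3.0, 0.0],     # assumed 2x2 coordination game: strategies A, B
                   [0.0, 2.0]])
n_agents, beta = 200, 2.0          # population size and inverse "learning temperature"
eps_slow = 0.05                    # slow replicator step size

strategies = rng.integers(0, 2, n_agents)   # each agent's current strategy
x = np.array([0.5, 0.5])                    # society-level strategy frequencies

for t in range(500):
    # Fast scale: Monte Carlo (Metropolis) learning by individual agents against the mix x
    for _ in range(n_agents):
        i = rng.integers(n_agents)
        s_old, s_new = strategies[i], 1 - strategies[i]
        d_payoff = payoff[s_new] @ x - payoff[s_old] @ x
        if d_payoff > 0 or rng.random() < np.exp(beta * d_payoff):
            strategies[i] = s_new
    # Slow scale: replicator update of x, with fitness evaluated against the agents' mix
    freq = np.bincount(strategies, minlength=2) / n_agents
    fitness = payoff @ freq
    x = (1 - eps_slow) * x + eps_slow * x * fitness / (x @ fitness)
    x /= x.sum()

print(f"final society mix: A={x[0]:.2f}, B={x[1]:.2f}")
```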

Relevance: 60.00%

Abstract:

The topography of the free energy landscape in phase space of a dense hard-sphere system, characterized by a discretized free energy functional of the Ramakrishnan-Yussouff form, is investigated numerically using a specially devised Monte Carlo procedure. We locate a considerable number of glassy local minima of the free energy and analyze the distributions of the free energy at a minimum and of an appropriately defined phase-space "distance" between different minima. We find evidence for the existence of pairs of closely related glassy minima ("two-level systems"). We also investigate the way the system makes transitions as it moves from the basin of attraction of one minimum to that of another after a start under nonequilibrium conditions. This allows us to determine the effective height of the free energy barriers that separate a glassy minimum from the others. The dependence of the barrier height on the density is investigated in detail. The general appearance of the free energy landscape resembles that of a putting green: relatively deep minima separated by a fairly flat structure. We discuss the connection of our results with the Vogel-Fulcher law and relate our observations to other work on the glass transition.
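A heavily simplified sketch of the landscape-exploration strategy: a toy rugged "free energy" over a few density-like variables stands in for the discretized Ramakrishnan-Yussouff functional, local Monte Carlo quenches from random starts locate minima, and distinct minima are compared through a Euclidean "distance" in configuration space. Every ingredient here is an illustrative assumption, not the paper's functional or procedure:

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 8

def free_energy(rho):
    """Toy rugged 'free energy' with many local minima, standing in for a discretized functional."""
    return np.sum((rho**2 - 1.0) ** 2) + 0.3 * np.sum(np.cos(5.0 * rho))

def mc_quench(rho, n_steps=5000, step=0.05):
    """Zero-temperature Monte Carlo: accept only moves that lower the free energy."""
    f = free_energy(rho)
    for _ in range(n_steps):
        trial = rho.copy()
        i = rng.integers(dim)
        trial[i] += rng.normal(0, step)
        f_trial = free_energy(trial)
        if f_trial < f:
            rho, f = trial, f_trial
    return rho, f

minima = []
for start in range(30):                       # repeated quenches from random configurations
    rho0 = rng.uniform(-1.5, 1.5, dim)
    rho_min, f_min = mc_quench(rho0)
    if not any(np.linalg.norm(rho_min - m) < 0.2 for m, _ in minima):
        minima.append((rho_min, f_min))

print(f"distinct local minima found: {len(minima)}")
if len(minima) > 1:
    dists = [np.linalg.norm(a - b) for i, (a, _) in enumerate(minima) for b, _ in minima[i + 1:]]
    print(f"min / max inter-minimum 'distance': {min(dists):.2f} / {max(dists):.2f}")
```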

Relevance: 60.00%

Abstract:

We study, by means of experiments and Monte Carlo simulations, the scattering of light in random media, to determine the distance up to which photons travel along almost undeviated paths within a scattering medium and are therefore capable of casting a shadow of an opaque inclusion embedded within the medium. Such photons are isolated by polarisation discrimination, wherein the plane of linear polarisation of the input light is continuously rotated and the polarisation-preserving component of the emerging light is extracted by means of a Fourier transform. This technique is a software implementation of lock-in detection. We find that images may be recovered to a depth far in excess of that predicted by the diffusion theory of photon propagation. To understand our experimental results, we perform Monte Carlo simulations to model the random-walk behaviour of the multiply scattered photons. We present a new definition of a diffusing photon in terms of the memory of its initial direction of propagation, which we quantify in terms of an angular correlation function. This redefinition yields the penetration depth of the polarisation-preserving photons. Based on these results, we formulate a model of shadow formation in a turbid medium, whose predictions are in good agreement with our experimental results.
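A sketch of the Monte Carlo ingredient of such a study: photons take exponentially distributed free flights between scattering events, are deflected by a simple forward-peaked perturbation (a toy stand-in for a real phase function), and the angular correlation between the current and launch directions is tracked against depth. The mean free path, the scattering "spread" and all other values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

n_photons, n_scatter = 2000, 60
mfp = 0.1          # assumed scattering mean free path (arbitrary units)
spread = 0.4       # assumed forward-peaked scattering strength (toy phase function)

launch = np.array([0.0, 0.0, 1.0])
cos_with_launch = np.zeros((n_photons, n_scatter))
depth = np.zeros((n_photons, n_scatter))

for p in range(n_photons):
    pos, direction = np.zeros(3), launch.copy()
    for s in range(n_scatter):
        pos = pos + direction * rng.exponential(mfp)                  # free flight to next scatterer
        kick = rng.normal(size=3)
        direction = direction + spread * kick / np.linalg.norm(kick)  # toy forward-peaked scatter
        direction /= np.linalg.norm(direction)
        cos_with_launch[p, s] = direction @ launch                    # memory of the launch direction
        depth[p, s] = pos[2]

# Angular correlation: how quickly photons "forget" their launch direction with depth
mean_cos = cos_with_launch.mean(axis=0)
mean_depth = depth.mean(axis=0)
for s in range(0, n_scatter, 10):
    print(f"mean depth {mean_depth[s]:5.2f}  <cos theta> = {mean_cos[s]:.3f}")
```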

Relevance: 60.00%

Abstract:

The possible integration of the single-electron transistor (SET) with CMOS technology is making the study of semiconductor SETs more important than that of metallic SETs; consequently, the study of energy quantization effects on semiconductor SET devices and circuits is gaining significance. In this paper, for the first time, the effects of energy quantization on SET inverter performance are examined through analytical modeling and Monte Carlo simulations. It is observed that the primary effect of energy quantization is to change the Coulomb blockade region and drain current of SET devices, and as a result it affects the noise margin, power dissipation and propagation delay of the SET inverter. A new model for the noise margin of the SET inverter is proposed that includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter against energy quantization is studied. It is shown that an SET inverter designed with CT : CG = 1/3 (where CT and CG are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.

Relevance: 60.00%

Abstract:

The LISA Parameter Estimation Taskforce was formed in September 2007 to provide the LISA Project with vetted codes, source distribution models and results related to parameter estimation. The Taskforce's goal is to be able to quickly calculate the impact of any mission design changes on LISA's science capabilities, based on reasonable estimates of the distribution of astrophysical sources in the universe. This paper describes our Taskforce's work on massive black-hole binaries (MBHBs). Given present uncertainties in the formation history of MBHBs, we adopt four different population models, based on (i) whether the initial black-hole seeds are small or large and (ii) whether accretion is efficient or inefficient at spinning up the holes. We compare four largely independent codes for calculating LISA's parameter-estimation capabilities. All codes are based on the Fisher-matrix approximation, but in the past they used somewhat different signal models, source parametrizations and noise curves. We show that once these differences are removed, the four codes give results in extremely close agreement with each other. Using a code that includes both spin precession and higher harmonics in the gravitational-wave signal, we carry out Monte Carlo simulations and determine the number of events that can be detected and accurately localized in our four population models.
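A toy illustration of the Fisher-matrix Monte Carlo approach: for each randomly drawn source, a simple signal model is differentiated numerically with respect to its parameters, the Fisher matrix is assembled under white noise, and its inverse gives the parameter-error estimates. The sinusoidal signal, the noise level and the parameter ranges are illustrative assumptions; the actual Taskforce codes use full LISA response models, MBHB waveforms and noise curves:

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 2000)   # observation times (arbitrary units)
sigma_n = 0.5                     # assumed white-noise standard deviation per sample

def signal(params):
    """Toy waveform: amplitude, frequency and phase stand in for the source parameters."""
    amp, freq, phase = params
    return amp * np.sin(2 * np.pi * freq * t + phase)

def fisher_matrix(params, eps=1e-5):
    """Fisher matrix F_ij = sum_k (dh/dp_i)(dh/dp_j) / sigma^2 via central differences."""
    n = len(params)
    derivs = []
    for i in range(n):
        dp = np.zeros(n); dp[i] = eps
        derivs.append((signal(params + dp) - signal(params - dp)) / (2 * eps))
    return np.array([[np.sum(di * dj) for dj in derivs] for di in derivs]) / sigma_n**2

# Monte Carlo over a toy "population" of sources
errors = []
for _ in range(200):
    params = np.array([rng.uniform(0.5, 2.0), rng.uniform(5.0, 50.0), rng.uniform(0, 2 * np.pi)])
    cov = np.linalg.inv(fisher_matrix(params))           # Fisher-matrix approximation to the covariance
    errors.append(np.sqrt(np.diag(cov)))
errors = np.array(errors)
print("median 1-sigma errors (amp, freq, phase):", np.round(np.median(errors, axis=0), 4))
```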

Relevance: 60.00%

Abstract:

The contemporary methodology for growth models of organisms is based on continuous trajectories, which hinders the modelling of stepwise growth in crustacean populations. Growth models for fish normally assume a continuous growth function, but a different type of model is needed for crustaceans, which must moult in order to grow. Crustacean growth is a discontinuous process owing to the periodic shedding of the exoskeleton in moulting, and this stepwise growth makes estimation more complex. Stochastic approaches can be used to model discontinuous growth, commonly known as "jumps" (Figure 1), but the model must ensure that the jumps are strictly positive. In view of this, we introduce a subordinator, a special case of a Lévy process. A subordinator is a non-decreasing Lévy process, which assists in modelling crustacean growth and in capturing the individual variability and stochasticity in moulting periods and increments. We develop methods for parameter estimation and illustrate them with a dataset from laboratory experiments. The motivating dataset is from the ornate rock lobster, Panulirus ornatus, which is found between Australia and Papua New Guinea. Because of sex effects on growth (Munday et al., 2004), we estimate the growth parameters separately for each sex. Since all hard parts are shed at moulting, exact age determination of a lobster is challenging; however, the growth parameters of the moult process can be estimated from tank data through (i) inter-moult periods and (ii) moult increments. We derive a joint density made up of two functions, one for moult increments and the other for the time intervals between moults; conditionally on the pre-moult length, these are treated as independent (a Markov property), so the parameters in each function can be estimated separately. We then integrate the two functions through a Monte Carlo method to obtain a population mean growth curve for crustaceans (e.g. the red curve in Figure 1).
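A minimal sketch of the stepwise growth process described above: inter-moult periods and strictly positive moult increments are drawn from assumed gamma distributions, giving a non-decreasing jump trajectory in the spirit of a subordinator, and a Monte Carlo average over simulated animals yields a population mean growth curve. The distributions and parameter values are illustrative, not estimates from the Panulirus ornatus data:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_growth(l0=30.0, horizon=365.0):
    """One animal: stepwise (non-decreasing) length trajectory from moult times and increments."""
    t, length = 0.0, l0
    times, lengths = [0.0], [l0]
    while True:
        t += rng.gamma(shape=4.0, scale=10.0)          # assumed inter-moult period, days
        if t > horizon:
            break
        length += rng.gamma(shape=2.0, scale=1.5)      # assumed positive moult increment, mm
        times.append(t); lengths.append(length)
    return np.array(times), np.array(lengths)

def length_at(times, lengths, query):
    """Step-function lookup: length is constant between moults."""
    idx = np.searchsorted(times, query, side="right") - 1
    return lengths[idx]

# Monte Carlo population mean growth curve (cf. the red curve in Figure 1)
grid = np.linspace(0, 365, 50)
curves = []
for _ in range(2000):
    times, lengths = simulate_growth()
    curves.append([length_at(times, lengths, q) for q in grid])
mean_curve = np.mean(curves, axis=0)
print(f"mean length: day 0 = {mean_curve[0]:.1f} mm, day 365 = {mean_curve[-1]:.1f} mm")
```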

Relevance: 60.00%

Abstract:

A modeling paradigm is proposed for covariate, variance and working correlation structure selection for longitudinal data analysis. Appropriate selection of covariates is a prerequisite for correct variance modeling, and selecting the appropriate covariates and variance function is in turn vital to correlation structure selection. This leads to a stepwise model selection procedure that deploys a combination of different model selection criteria. Although these criteria share a common theoretical root in approximating the Kullback-Leibler distance, they are designed to address different aspects of model selection and have different merits and limitations. For example, the extended quasi-likelihood information criterion (EQIC) with a covariance penalty performs well for covariate selection even when the working variance function is misspecified, but EQIC carries little information on correlation structures. The proposed model selection strategies are outlined and a Monte Carlo assessment of their finite-sample properties is reported. Two longitudinal studies are used for illustration.
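A generic sketch of a Monte Carlo assessment of a selection criterion's finite-sample behaviour: data with a known covariate set are simulated repeatedly, every candidate covariate subset is scored, and the frequency with which the true subset is selected is recorded. Ordinary least squares with Gaussian AIC is used here purely as a simple stand-in for the quasi-likelihood criteria (such as EQIC) discussed above; the data-generating model is an illustrative assumption:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)

def aic(y, X):
    """Gaussian AIC from a least-squares fit: n*log(RSS/n) + 2*(number of parameters)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

n_subjects, n_times, n_cov = 50, 4, 4
true_set = (0, 1)                      # covariates that truly enter the mean model
n_correct, n_rep = 0, 500

for _ in range(n_rep):
    n = n_subjects * n_times
    X = rng.normal(size=(n, n_cov))
    subject_effect = np.repeat(rng.normal(0, 0.5, n_subjects), n_times)  # within-subject correlation
    y = 1.0 + X[:, 0] + 0.8 * X[:, 1] + subject_effect + rng.normal(0, 1.0, n)
    # evaluate every non-empty candidate covariate subset
    best_set, best_aic = None, np.inf
    for k in range(1, n_cov + 1):
        for cols in combinations(range(n_cov), k):
            design = np.column_stack([np.ones(n), X[:, cols]])
            a = aic(y, design)
            if a < best_aic:
                best_set, best_aic = cols, a
    n_correct += (best_set == true_set)

print(f"true covariate set selected in {n_correct / n_rep:.0%} of {n_rep} replicates")
```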

Relevance: 60.00%

Abstract:

Urban encroachment on dense, coastal koala populations has ensured that their management receives increasing government and public attention. The recently developed National Koala Conservation Strategy calls for the maintenance of viable populations in the wild, yet the success of this and other conservation initiatives is hampered by the lack of reliable and generally accepted national and regional population estimates. In this paper we address this problem in a potentially large, but poorly studied, regional population in the State that is likely to have the largest wild populations. We draw on findings from previous reports in this series and apply the faecal standing-crop method (FSCM) to derive a regional estimate of more than 59 000 individuals. Validation trials in riverine communities showed that estimates of animal density obtained from the FSCM and from direct observation were in close agreement. Bootstrapping and Monte Carlo simulations were used to obtain variance estimates for our population estimates in different vegetation associations across the region. The most favoured habitat was riverine vegetation, which covered only 0.9% of the region but supported 45% of the koalas. We also estimated that between 1969 and 1995 about 30% of the native vegetation associations considered potential koala habitat were cleared, leading to a decline of perhaps 10% in koala numbers. Management of this large regional population has significant implications for the national conservation of the species: the continued viability of this population is critically dependent on the retention and management of riverine and residual vegetation communities, and future vegetation-management guidelines should be cognisant of the potential impacts of clearing even small areas of critical habitat. We also highlight eight management implications.
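A sketch of the bootstrapping step used to attach variance estimates to a regional total of this kind: per-site density estimates within each vegetation association are resampled with replacement, and the implied population total is recomputed on each resample. The site densities, association names and areas below are invented illustrative numbers, not the survey data:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical per-site koala densities (animals/ha) from a faecal standing-crop survey
site_density = {
    "riverine": np.array([0.65, 0.80, 0.55, 0.90, 0.70]),
    "residual": np.array([0.10, 0.15, 0.08, 0.12]),
    "other":    np.array([0.01, 0.02, 0.015, 0.01, 0.03]),
}
area_ha = {"riverine": 20_000, "residual": 150_000, "other": 2_000_000}  # assumed areas

def total_estimate(densities):
    """Regional total = sum over associations of (mean site density x association area)."""
    return sum(np.mean(densities[v]) * area_ha[v] for v in densities)

point = total_estimate(site_density)

# Bootstrap: resample sites within each vegetation association with replacement
boot = np.empty(5000)
for b in range(5000):
    resampled = {v: rng.choice(d, size=len(d), replace=True) for v, d in site_density.items()}
    boot[b] = total_estimate(resampled)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate {point:,.0f} koalas, 95% bootstrap interval ({lo:,.0f}, {hi:,.0f})")
```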

Relevance: 60.00%

Abstract:

This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size, and from a single run it reports the range of candidate models and their respective estimated probabilities, reflecting the uncertainty in the final model. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies and three case studies from the literature illustrate Zmix and Zswitch. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.

Relevance: 60.00%

Abstract:

Many statistical forecast systems are available to interested users. To be useful for decision-making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanism and its statistical manifestation have been firmly established, the forecasts must also provide some quantitative evidence of 'quality'. However, the quality of statistical climate forecast systems (forecast quality) is an ill-defined and frequently misunderstood property. Often, providers and users of such forecast systems are unclear about what 'quality' entails and how to measure it, leading to confusion and misinformation. Here we present a generic framework to quantify aspects of forecast quality using an inferential approach to calculate nominal significance levels (p-values), obtained either by directly applying non-parametric statistical tests such as Kruskal-Wallis (KW) or Kolmogorov-Smirnov (KS) or by using Monte Carlo methods (in the case of forecast skill scores). Once converted to p-values, these forecast quality measures provide a means to objectively evaluate and compare temporal and spatial patterns of forecast quality across datasets and forecast systems. Our analysis demonstrates the importance of providing p-values rather than adopting arbitrarily chosen significance levels such as p < 0.05 or p < 0.01, which is still common practice. This is illustrated by applying non-parametric tests (such as KW and KS) and skill scoring methods (LEPS and RPSS) to the 5-phase Southern Oscillation Index classification system, using historical rainfall data from Australia, the Republic of South Africa and India. The selection of quality measures is based solely on their common use and does not constitute endorsement. We found that non-parametric statistical tests can be adequate proxies for skill measures such as LEPS or RPSS. The framework can be implemented anywhere, regardless of dataset, forecast system or quality measure. Eventually such inferential evidence should be complemented by descriptive statistical methods in order to fully assist in operational risk management.
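A sketch of the Monte Carlo route to a p-value for a forecast skill score: the score obtained with the actual phase assignments is compared against its null distribution, generated by randomly permuting the phase labels. The score used below is a simple spread of phase-conditional means standing in for LEPS or RPSS, and the rainfall data and phase signal are synthetic:

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic example: 100 years of seasonal rainfall and a 5-phase SOI-style classification
n_years, n_phases = 100, 5
phase = rng.integers(0, n_phases, n_years)
rainfall = rng.gamma(shape=4.0, scale=50.0, size=n_years) + 20.0 * (phase == 2)  # weak phase signal

def skill(rainfall, phase):
    """Toy skill score: spread of phase-conditional mean rainfall (stand-in for LEPS/RPSS)."""
    means = np.array([rainfall[phase == p].mean() for p in range(n_phases)])
    return means.max() - means.min()

observed = skill(rainfall, phase)

# Monte Carlo null distribution: permute the phase labels, destroying any real association
null = np.array([skill(rainfall, rng.permutation(phase)) for _ in range(10_000)])
p_value = np.mean(null >= observed)
print(f"observed skill {observed:.1f}, Monte Carlo p-value = {p_value:.3f}")
```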

Relevance: 60.00%

Abstract:

Pseudo-marginal methods such as the grouped independence Metropolis-Hastings (GIMH) and Markov chain within Metropolis (MCWM) algorithms have been introduced in the literature as an approach to performing Bayesian inference in latent variable models. These methods replace intractable likelihood calculations with unbiased estimates within Markov chain Monte Carlo algorithms. The GIMH method has the posterior of interest as its limiting distribution, but suffers from poor mixing if it is too computationally intensive to obtain high-precision likelihood estimates. The MCWM algorithm has better mixing properties, but less theoretical support. In this paper we propose to use Gaussian processes (GPs) to accelerate the GIMH method, whilst using a short pilot run of MCWM to train the GP. Our new method, GP-GIMH, is illustrated on simulated data from a stochastic volatility model and a gene network model.
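A compact sketch of the pseudo-marginal idea that GIMH builds on: an intractable likelihood is replaced by an unbiased importance-sampling estimate inside a Metropolis-Hastings chain, and the estimate for the current state is reused rather than refreshed. The toy latent-variable model, flat prior and tuning constants below are illustrative assumptions, and the GP acceleration proposed in the paper is not shown:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy latent-variable model: y_i = z_i + noise, with z_i ~ N(theta, 1); theta is the target parameter
theta_true = 2.0
y = theta_true + rng.normal(size=30) + rng.normal(size=30)

def loglik_hat(theta, n_particles=50):
    """Log of an unbiased (on the likelihood scale) importance-sampling estimate of p(y | theta)."""
    z = theta + rng.normal(size=(n_particles, len(y)))     # latent draws from their prior given theta
    logw = -0.5 * (y - z) ** 2 - 0.5 * np.log(2 * np.pi)   # log p(y_i | z_i)
    return np.sum(np.log(np.mean(np.exp(logw), axis=0)))   # average particles per observation

# Pseudo-marginal Metropolis-Hastings (GIMH-style): the current state keeps its estimated log-likelihood
theta, ll = 0.0, loglik_hat(0.0)
samples = []
for it in range(5000):
    prop = theta + rng.normal(0, 0.3)
    ll_prop = loglik_hat(prop)
    if np.log(rng.random()) < ll_prop - ll:   # flat prior assumed, so the ratio is likelihood only
        theta, ll = prop, ll_prop
    samples.append(theta)

print(f"posterior mean of theta ~ {np.mean(samples[1000:]):.2f} (true value {theta_true})")
```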