901 results for stochastic simulation method
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) presents a topic of ongoing interest. Part of the discussion revolves around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single and fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is, the number of contributors, and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
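A minimal sketch of the probabilistic strategy for a single locus may help fix ideas; the allele labels, frequencies, and flat prior below are invented for illustration and are not taken from the paper. The posterior follows Bayes' theorem, P(N | alleles) proportional to P(alleles | N) P(N), where P(alleles | N) is the probability that 2N independent draws from the population allele frequencies produce exactly the observed allele set.

# Hypothetical single-locus illustration: posterior over the number of
# contributors N given an observed allele set, via Bayes' theorem.
from itertools import product

freqs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}  # assumed allele frequencies
observed = {"a", "b", "c"}                        # alleles seen in the mixture

def p_alleles_given_n(n):
    """P(exactly the observed allele set | N = n, i.e. 2n allele draws)."""
    total = 0.0
    for draw in product(freqs, repeat=2 * n):     # enumerate ordered draws
        if set(draw) == observed:
            p = 1.0
            for allele in draw:
                p *= freqs[allele]
            total += p
    return total

prior = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}      # flat prior over N
unnorm = {n: p_alleles_given_n(n) * prior[n] for n in prior}
z = sum(unnorm.values())
posterior = {n: v / z for n, v in unnorm.items()}
print(posterior)   # P(N=1) = 0: one contributor cannot carry three alleles

The zero posterior mass at N = 1 shows how the probabilistic treatment absorbs the kind of incompatibility that would stall a fixed-N procedure.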
Abstract:
Our new, simple method for calculating accurate Franck-Condon factors including nondiagonal (i.e., mode-mode) anharmonic coupling is used to simulate the C2H4+ X̃2B3u ← C2H4 X̃1Ag band in the photoelectron spectrum. An improved vibrational basis set truncation algorithm, which permits very efficient computations, is employed. Because the torsional mode is highly anharmonic, it is separated from the other modes and treated exactly. All other modes are treated through second-order perturbation theory. The perturbation-theory corrections are significant and lead to good agreement with experiment, although the separability assumption for torsion causes the C2D4 results to be not as good as those for C2H4. A variational formulation to overcome this limitation, and to deal with large anharmonicities in general, is suggested.
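For orientation, the textbook limiting case of such factors, displaced harmonic oscillators with no mode coupling, reduces to a Poisson progression in the Huang-Rhys factor S: |<0|n>|^2 = exp(-S) S^n / n!. The short sketch below computes only this baseline; it is not the anharmonic, mode-coupled method of the paper, and the value of S is illustrative.

# Franck-Condon factors in the displaced harmonic-oscillator limit:
# |<0|n>|^2 = exp(-S) * S**n / n!  (a baseline sketch only).
import math

S = 1.3                                       # illustrative Huang-Rhys factor
fc = [math.exp(-S) * S**n / math.factorial(n) for n in range(8)]
print(sum(fc), fc)                            # progression sums toward 1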
Abstract:
Biochemical systems are commonly modelled by systems of ordinary differential equations (ODEs). A particular class of such models, called S-systems, has recently gained popularity in biochemical system modelling. The parameters of an S-system are usually estimated from time-course profiles. However, finding these estimates is a difficult computational problem. Moreover, although several methods have recently been proposed to solve this problem for ideal profiles, relatively little progress has been reported for noisy profiles. We describe a special feature of a Newton-flow optimisation problem associated with S-system parameter estimation. This enables us to significantly reduce the search space, and it also lends itself to parameter estimation for noisy data. We illustrate the applicability of our method by applying it to noisy time-course data synthetically produced from previously published 4- and 30-dimensional S-systems. In addition, we propose an extension of our method that allows the detection of network topologies for small S-systems. We introduce a new method for estimating S-system parameters from time-course profiles. We show that the performance of this method compares favourably with competing methods for ideal profiles, and that it also allows the determination of parameters for noisy profiles.
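For context, an S-system writes each state equation as a difference of two power laws, dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij. The sketch below generates the kind of noisy time-course profile the estimation problem starts from; all rate constants, kinetic orders, and the noise level are invented.

# Forward simulation of a 2-dimensional S-system producing a noisy
# time-course profile; every parameter value here is invented.
import numpy as np
from scipy.integrate import solve_ivp

alpha = np.array([2.0, 1.5])              # production rate constants
beta  = np.array([1.0, 1.0])              # degradation rate constants
g = np.array([[0.0, -0.5],                # kinetic orders, production
              [0.8,  0.0]])
h = np.array([[0.7, 0.0],                 # kinetic orders, degradation
              [0.0, 0.6]])

def s_system(t, x):
    prod = alpha * np.prod(x ** g, axis=1)    # alpha_i * prod_j x_j^g_ij
    deg  = beta  * np.prod(x ** h, axis=1)    # beta_i  * prod_j x_j^h_ij
    return prod - deg

t_eval = np.linspace(0.0, 10.0, 50)
sol = solve_ivp(s_system, (0.0, 10.0), [0.5, 1.2], t_eval=t_eval)
noisy = sol.y + 0.02 * np.random.default_rng(0).normal(size=sol.y.shape)

Parameter estimation then amounts to recovering alpha, beta, g, and h from profiles like noisy.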
Abstract:
We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
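The extension step can be sketched concretely (the helper name is ours, and outcomes are assumed rescaled to [0, 1]): a bounded outcome x is replaced by a Bernoulli variable equal to 1 with probability x, which preserves the mean exactly, so any exact test for two-outcome variables applies to the transformed data.

# Randomized reduction of a bounded outcome in [0, 1] to a Bernoulli
# outcome with the same mean; a sketch of the extension step only, not
# the paper's separate derandomization step.
import random

def binarize(x, rng=random):
    """Map x in [0, 1] to {0, 1} with E[output] = x."""
    return 1 if rng.random() < x else 0

samples = [0.2, 0.9, 0.55, 0.7]
binary = [binarize(x) for x in samples]   # feed into any exact Bernoulli test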
Abstract:
The paper develops a method to solve higher-dimensional stochastic control problems in continuous time. A finite-difference-type approximation scheme is used on a coarse grid of low-discrepancy points, while the value function at intermediate points is obtained by regression. The stability properties of the method are discussed, and applications are given to test problems of up to 10 dimensions. Accurate solutions to these problems can be obtained on a personal computer.
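The regression ingredient can be sketched as follows; the quadratic polynomial basis, the Sobol point count, and the stand-in value function are our illustrative choices, not the paper's specification.

# Least-squares regression of values computed on a coarse grid of
# low-discrepancy (Sobol) points, evaluated at intermediate points.
import numpy as np
from scipy.stats import qmc

dim = 4
pts = qmc.Sobol(d=dim, scramble=True, seed=0).random_base2(m=8)  # 256 points

def value(x):                         # stand-in for values from the scheme
    return np.sum(x ** 2, axis=1)

def basis(x):                         # quadratic basis: 1, x_i, x_i * x_j
    cols = [np.ones(len(x))]
    for i in range(dim):
        cols.append(x[:, i])
        for j in range(i, dim):
            cols.append(x[:, i] * x[:, j])
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(basis(pts), value(pts), rcond=None)
query = np.random.default_rng(1).random((5, dim))   # intermediate points
approx = basis(query) @ coef          # regressed value function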
Abstract:
This paper studies the rate of convergence of an appropriate discretization scheme for the solution of the McKean-Vlasov equation introduced by Bossy and Talay. More specifically, we consider approximations of the distribution and of the density of the solution of the stochastic differential equation associated with the McKean-Vlasov equation. The scheme adopted here is a mixed one: Euler/weakly interacting particle system. If $n$ is the number of weakly interacting particles and $h$ is the uniform step in the time discretization, we prove that the rate of convergence of the distribution functions of the approximating sequence in the $L^1(\Omega \times \mathbb{R})$ norm and in the sup norm is of the order of $\frac{1}{\sqrt{n}} + h$, while for the densities it is of the order $h + \frac{1}{\sqrt{nh}}$. This result is obtained by carefully employing techniques of Malliavin calculus.
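A sketch of the mixed scheme for a toy example (the drift E[X] - X and all numerical values are ours, not Bossy and Talay's): n weakly interacting particles are advanced with a uniform Euler step h, the empirical mean of the cloud standing in for the law of the solution.

# Euler / weakly interacting particle scheme for the toy McKean-Vlasov
# SDE dX = (E[X] - X) dt + sigma dW; coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, h, steps, sigma = 1000, 0.01, 500, 0.3
x = rng.normal(1.0, 0.5, size=n)          # initial particle cloud

for _ in range(steps):
    drift = x.mean() - x                  # empirical law replaces E[X]
    x = x + drift * h + sigma * np.sqrt(h) * rng.normal(size=n)

# The empirical distribution function of x now approximates that of the
# solution, with error of the order 1/sqrt(n) + h in the sense above.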
Abstract:
In this paper I explore the issue of nonlinearity (both in the data-generation process and in the functional form that establishes the relationship between the parameters and the data) as it bears on the poor performance of the Generalized Method of Moments (GMM) in small samples. To this end, I build a sequence of models, starting with a simple linear model and enlarging it progressively until I approximate a standard (nonlinear) neoclassical growth model. I then use simulation techniques to find the small-sample distribution of the GMM estimators in each of the models.
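The simulation design is easiest to see in the linear, just-identified starting case, where GMM reduces to instrumental variables with the single moment condition E[z(y - beta*x)] = 0; the data-generating values below are invented.

# Small-sample distribution of a just-identified GMM (IV) estimator by
# simulation; the linear DGP and its parameter values are invented.
import numpy as np

rng = np.random.default_rng(0)
beta_true, n, reps = 1.0, 50, 5000
estimates = np.empty(reps)

for r in range(reps):
    z = rng.normal(size=n)                  # instrument
    x = 0.5 * z + rng.normal(size=n)        # regressor correlated with z
    y = beta_true * x + rng.normal(size=n)  # outcome
    estimates[r] = (z @ y) / (z @ x)        # solves E[z(y - beta x)] = 0

print(estimates.mean(), estimates.std())    # finite-sample bias and spread

Histograms of estimates display the finite-sample dispersion whose behavior the paper traces as the models become progressively less linear.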
Abstract:
We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.
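The contrast is easiest to see in a toy case where the Malliavin integration-by-parts weight is explicit: for X_T = sigma * W_T, the density satisfies p(y) = E[1{X_T > y} * W_T / (sigma * T)]. The sketch below estimates this expectation and compares it with a kernel estimate; it covers only this toy case, not general Wiener functionals.

# Monte Carlo density of X_T = sigma * W_T via the explicit Malliavin
# integration-by-parts weight W_T / (sigma * T), versus a kernel
# density estimate and the exact Gaussian density.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
sigma, T, n = 0.5, 1.0, 100_000
w_T = np.sqrt(T) * rng.normal(size=n)     # W_T samples
x_T = sigma * w_T

y = 0.3                                   # evaluation point for p(y)
malliavin = np.mean((x_T > y) * w_T / (sigma * T))
kde = gaussian_kde(x_T)(y)[0]
exact = norm.pdf(y, scale=sigma * np.sqrt(T))
print(malliavin, kde, exact)              # the three estimates agree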
Abstract:
The Network Revenue Management problem can be formulated as a stochastic dynamic programming problem (DP, with 'optimal' solution value V*) whose exact solution is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be used in practice to make accept/deny decisions for booking requests. Recently, Adelman [1] and Topaloglu [18] have proposed alternative upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest because empirical studies and practical experience suggest that models giving tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound, and that the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly coupled dynamic programming.
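For concreteness, the DLP replaces stochastic itinerary demand by its expectation and solves a linear program, max f'x subject to Ax <= c and 0 <= x <= E[D], with A the leg-itinerary incidence matrix; the tiny instance below, with invented fares, capacities, and expected demands, is a sketch of that bound only.

# DLP upper bound for a toy network revenue management instance with
# 2 legs and 3 itineraries; all numbers are invented.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1, 0, 1],                  # leg-itinerary incidence
              [0, 1, 1]])                 # itinerary 3 uses both legs
capacity = np.array([100, 80])
fares = np.array([120.0, 150.0, 250.0])
expected_demand = np.array([90.0, 60.0, 40.0])

res = linprog(c=-fares,                   # linprog minimizes, so negate
              A_ub=A, b_ub=capacity,
              bounds=list(zip([0.0] * 3, expected_demand)))
dlp_bound = -res.fun                      # upper bound on V*
print(dlp_bound, res.x)

The dual prices of the capacity constraints are what DLP-based controls typically use to accept or deny booking requests; the paper's tightened bounds sharpen upper bounds of this same kind.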
Abstract:
The computer code system PENELOPE (version 2008) performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials for a wide energy range, from a few hundred eV to about 1 GeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A geometry package called PENGEOM permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e., planes, spheres, cylinders, etc. This report is intended not only to serve as a manual of the PENELOPE code system, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm.
Abstract:
We perform direct numerical simulations of drainage by solving the Navier-Stokes equations in the pore space and employing the Volume of Fluid (VOF) method to track the evolution of the fluid-fluid interface. After demonstrating that the method is able to deal with large viscosity contrasts and to model the transition from stable flow to viscous fingering, we focus on the definition of macroscopic capillary pressure. When the fluids are at rest, the difference between inlet and outlet pressures and the difference between the intrinsic phase-average pressures coincide with the capillary pressure. However, when the fluids are in motion, these quantities are dominated by viscous forces. In this case, only a definition based on the variation of the interfacial energy provides an accurate measure of the macroscopic capillary pressure and allows the viscous and capillary pressure components to be separated.
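Schematically, with sigma the interfacial tension, A_nw the fluid-fluid interfacial area, and V_nw the volume of the invading phase (and leaving aside the bookkeeping of solid-wall terms, for which the paper should be consulted), an energy-based definition of this kind reads

    p_c = \frac{\mathrm{d}E_s}{\mathrm{d}V_{nw}} \approx \sigma \, \frac{\mathrm{d}A_{nw}}{\mathrm{d}V_{nw}},

i.e., the macroscopic capillary pressure is the interfacial energy stored per unit volume of invading fluid, a quantity not contaminated by the viscous pressure drop.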
Abstract:
We have modeled numerically the seismic response of a poroelastic inclusion with properties applicable to an oil reservoir that interacts with an ambient wavefield. The model includes wave-induced fluid flow caused by pressure differences between mesoscopic-scale (i.e., on the order of centimeters to meters) heterogeneities. We used a viscoelastic approximation on the macroscopic scale to implement the attenuation and dispersion resulting from this mesoscopic-scale theory in numerical simulations of wave propagation on the kilometer scale. This upscaling method includes finite-element modeling of wave-induced fluid flow to determine effective seismic properties of the poroelastic media, such as attenuation of P- and S-waves. The fitted, equivalent viscoelastic behavior is implemented in finite-difference wave propagation simulations. With this two-stage process, we model numerically the quasi-poroelastic wave propagation on the kilometer scale and study the impact of fluid properties and fluid saturation on the modeled seismic amplitudes. In particular, we address the question of whether poroelastic effects within an oil reservoir may be a plausible explanation for the low-frequency ambient wavefield modifications observed at oil fields in recent years. Our results indicate that ambient wavefield modification is expected to occur for oil reservoirs exhibiting high attenuation. Whether or not such modifications can be detected in surface recordings, however, will depend on acquisition design and noise-mitigation processing, as well as site-specific conditions such as the geologic complexity of the subsurface, the nature of the ambient wavefield, and the amount of surface noise.
Abstract:
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poroelastic wave propagation in cylindrical coordinates. An important application of this method is the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh consisting of three concentric domains representing the borehole fluid in the center, the borehole casing, and the surrounding porous formation. The spatial discretization is based on a Chebyshev expansion in the radial direction, Fourier expansions in the other directions, and a Runge-Kutta integration scheme for the time evolution. A domain decomposition method based on the method of characteristics is used to match the boundary conditions at the fluid/porous-solid and porous-solid/porous-solid interfaces. The viability and accuracy of the proposed method have been tested and verified in 2D polar coordinates through comparisons with analytical solutions as well as with results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. The proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is handled adequately.
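The radial building block can be sketched generically (this is the standard Chebyshev-Gauss-Lobatto differentiation matrix, not the borehole solver itself):

# Chebyshev collocation differentiation matrix on Gauss-Lobatto points;
# applying D approximates d/dx with spectral accuracy.
import numpy as np

def cheb(n):
    """Differentiation matrix D and nodes x for n+1 Chebyshev points."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    dx = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dx + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))           # negative-sum trick for diagonal
    return D, x

D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
print(err)                                # near machine precision

In a solver of the kind described above, matrices like D act in the radial direction of each concentric domain, while the angular and axial directions are handled by Fourier expansions.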
Abstract:
This contribution builds upon a former paper by the authors (Lipps and Betz 2004), in which a stochastic population projection for East and West Germany is performed. The aim was to forecast relevant population parameters and their distributions in a consistent way. We now present several modifications that have been modelled since. First, population parameters are modelled for the entire German population. To overcome the modelling problem posed by the structural break in the East during reunification, we show that the East's adaptation of the relevant figures can by now be considered complete. As a consequence, German parameters can be modelled using only the West German historical patterns, with the start-off population of Germany as a whole. Second, a new model for simulating age-specific fertility rates is presented, based on a quadratic spline approach. This offers greater flexibility in modelling various age-specific fertility curves. The simulation results are compared with the scenario-based official forecasts for Germany in 2050. For selected population parameters (e.g., the dependency ratio), it can be shown that the range spanned by the medium and extreme variants corresponds to the s-intervals in the stochastic framework. It therefore seems more appropriate to treat this range as an s-interval covering about two thirds of the true distribution.
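The fertility ingredient can be sketched as follows; the ages, the stylized rate schedule, and the knot placement are invented, and only the quadratic-spline fit itself reflects the approach named above.

# Least-squares quadratic-spline fit to age-specific fertility rates;
# ages, rates, and knots are invented for illustration.
import numpy as np
from scipy.interpolate import make_lsq_spline

ages = np.arange(15, 50)
rates = 0.11 * np.exp(-0.5 * ((ages - 29) / 5.5) ** 2)  # stylized schedule

knots = np.array([20.0, 26.0, 32.0, 40.0])              # interior knots
t = np.concatenate(([15.0] * 3, knots, [49.0] * 3))     # k=2: triple ends
spline = make_lsq_spline(ages, rates, t, k=2)

fine = np.linspace(15, 49, 200)
fitted = spline(fine)     # smooth fertility curve; perturbing the spline
                          # coefficients yields alternative scenarios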