958 results for stochastic numerical methods
Abstract:
In this paper we present a finite difference method for solving two-dimensional viscoelastic unsteady free surface flows governed by the single equation version of the eXtended Pom-Pom (XPP) model. The momentum equations are solved by a projection method which uncouples the velocity and pressure fields. We are interested in low Reynolds number flows and, to enhance the stability of the numerical method, an implicit technique for computing the pressure condition on the free surface is employed. This strategy is invoked to solve the governing equations within a Marker-and-Cell type approach while simultaneously calculating the correct normal stress condition on the free surface. The numerical code is validated by performing mesh refinement on a two-dimensional channel flow. Numerical results include an investigation of the influence of the parameters of the XPP equation on the extrudate swelling ratio and the simulation of the Barus effect for XPP fluids. (C) 2010 Elsevier B.V. All rights reserved.
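The projection idea mentioned above (advance a tentative velocity, solve a pressure Poisson equation, then correct the velocity) can be illustrated with a minimal Python sketch. This is not the paper's free-surface viscoelastic MAC solver: it assumes a Newtonian fluid on a square periodic grid, ignores advection and the XPP stress, and all names and parameters are illustrative.

```python
import numpy as np

def projection_step(u, v, dt, dx, nu=1e-2):
    """One Chorin-style projection step on a square, periodic n x n grid.

    (1) advance a tentative velocity with explicit diffusion only,
    (2) solve a pressure Poisson equation (here by FFT),
    (3) project the velocity onto the divergence-free space.
    Advection, free-surface handling and the viscoelastic stress are omitted;
    the discretization is deliberately simple and only illustrative.
    """
    lap = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

    # (1) tentative velocity
    u_star = u + dt * nu * lap(u)
    v_star = v + dt * nu * lap(v)

    # (2) pressure Poisson equation: lap(p) = div(u*) / dt
    div = ((np.roll(u_star, -1, 0) - np.roll(u_star, 1, 0)) +
           (np.roll(v_star, -1, 1) - np.roll(v_star, 1, 1))) / (2.0 * dx)
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                          # avoid dividing the mean mode by zero
    p_hat = -np.fft.fft2(div / dt) / k2
    p_hat[0, 0] = 0.0
    p = np.real(np.fft.ifft2(p_hat))

    # (3) correct the velocity with the pressure gradient
    dpdx = (np.roll(p, -1, 0) - np.roll(p, 1, 0)) / (2.0 * dx)
    dpdy = (np.roll(p, -1, 1) - np.roll(p, 1, 1)) / (2.0 * dx)
    return u_star - dt * dpdx, v_star - dt * dpdy, p

n, dx, dt = 64, 1.0 / 64, 1e-3
rng = np.random.default_rng(0)
u, v, p = projection_step(rng.normal(size=(n, n)), rng.normal(size=(n, n)), dt, dx)
```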
Abstract:
This paper considers the stability of explicit, implicit and Crank-Nicolson schemes for the one-dimensional heat equation on a staggered grid. Furthermore, we consider the cases when both explicit and implicit approximations of the boundary conditions are employed. Why we choose to do this is clearly motivated and arises from solving fluid flow equations with free surfaces when the Reynolds number can be very small in at least parts of the spatial domain. A comprehensive stability analysis is supplied: a novel result is the precise stability restriction on the Crank-Nicolson method when the boundary conditions are approximated explicitly, that is, at t = nΔt rather than t = (n + 1)Δt. The two-dimensional Navier-Stokes equations were then solved by a marker and cell approach for two simple problems that had analytic solutions. It was found that the stability results provided in this paper were qualitatively very similar, thereby providing insight as to why a Crank-Nicolson approximation of the momentum equations is only conditionally stable. Copyright (C) 2008 John Wiley & Sons, Ltd.
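The stability restrictions discussed above can be checked numerically. The sketch below assumes a standard 1D heat equation on a uniform (non-staggered) grid with Dirichlet ends, so it does not reproduce the paper's staggered-grid or boundary-condition analysis; it only shows the familiar behaviour that the explicit scheme is stable for r = dt/dx^2 <= 1/2 while the implicit and Crank-Nicolson update matrices keep their spectral radius at most one.

```python
import numpy as np

def heat_step_matrices(n, r):
    """Update matrices for u_t = u_xx with Dirichlet ends, r = dt/dx^2.

    Explicit:        u^{n+1} = (I + r A) u^n
    Implicit:        u^{n+1} = (I - r A)^{-1} u^n
    Crank-Nicolson:  u^{n+1} = (I - r A / 2)^{-1} (I + r A / 2) u^n
    where A is the tridiagonal second-difference matrix.
    """
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1))
    I = np.eye(n)
    explicit = I + r * A
    implicit = np.linalg.inv(I - r * A)
    crank_nicolson = np.linalg.inv(I - 0.5 * r * A) @ (I + 0.5 * r * A)
    return explicit, implicit, crank_nicolson

for r in (0.4, 0.6, 5.0):                 # explicit scheme requires r <= 1/2
    radii = [np.max(np.abs(np.linalg.eigvals(M))) for M in heat_step_matrices(50, r)]
    print(f"r = {r}: spectral radii (explicit, implicit, CN) =",
          ", ".join(f"{rho:.3f}" for rho in radii))
```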
Abstract:
The critical behavior of the stochastic susceptible-infected-recovered (SIR) model on a square lattice is obtained by numerical simulations and finite-size scaling. The order parameter, as well as the distribution of the number of recovered individuals, is determined as a function of the infection rate for several values of the system size. The analysis around criticality is obtained by exploring the close relationship between the present model and standard percolation theory. The quantity UP, equal to the ratio U between the second moment and the squared first moment of the size distribution multiplied by the order parameter P, is shown to have, for a square system, a universal value 1.0167(1) that is the same for site and bond percolation, confirming further that the SIR model is also in the percolation class.
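A minimal sketch of the kind of simulation described above: a stochastic SIR outbreak grown from a single seed on an L x L lattice, whose final number of recovered sites plays the role of a percolation cluster size. The asynchronous update rule and the infection probability below are illustrative choices, not the paper's exact rates or system sizes.

```python
import numpy as np

S, I, R = 0, 1, 2

def sir_outbreak(L, p, rng):
    """One stochastic SIR outbreak on an L x L lattice, started from one seed.

    A randomly chosen infected site either tries to infect a random nearest
    neighbour (probability p, succeeding if that neighbour is susceptible)
    or recovers (probability 1 - p).  Returns the final number of recovered
    sites, i.e. the 'cluster size' in the percolation picture."""
    state = np.full((L, L), S, dtype=np.int8)
    state[L // 2, L // 2] = I
    infected = [(L // 2, L // 2)]
    while infected:
        k = rng.integers(len(infected))
        i, j = infected[k]
        if rng.random() < p:                                 # infection attempt
            di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
            ni, nj = (i + di) % L, (j + dj) % L
            if state[ni, nj] == S:
                state[ni, nj] = I
                infected.append((ni, nj))
        else:                                                # recovery
            state[i, j] = R
            infected[k] = infected[-1]
            infected.pop()
    return int(np.sum(state == R))

rng = np.random.default_rng(0)
sizes = np.array([sir_outbreak(64, 0.55, rng) for _ in range(200)])
print("mean cluster size:", sizes.mean(),
      " moment ratio U = <s^2>/<s>^2:", (sizes**2).mean() / sizes.mean() ** 2)
```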
Abstract:
We investigate the critical behavior of a stochastic lattice model describing a predator-prey system. By means of a Monte Carlo procedure we simulate the model, defined on a regular square lattice, and determine the threshold of species coexistence, that is, the critical phase boundaries related to the transition between an active state, where both species coexist, and an absorbing state, where one of the species is extinct. A finite-size scaling analysis is employed to determine the order parameter, order-parameter fluctuations, correlation length and the critical exponents. Our numerical results for the critical exponents agree with those of the directed percolation universality class. We also check the validity of the hyperscaling relation and present the data collapse curves.
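The Monte Carlo procedure can be sketched as follows. The rules below (prey reproduce onto empty neighbouring sites, predators reproduce onto neighbouring prey, predators die spontaneously) are one common choice for a stochastic lattice predator-prey model and are not necessarily the exact rates used in the paper; lattice size and probabilities are illustrative.

```python
import numpy as np

EMPTY, PREY, PRED = 0, 1, 2

def sweep(lattice, a, b, c, rng):
    """One Monte Carlo sweep (L*L random single-site updates)."""
    L = lattice.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
        ni, nj = (i + di) % L, (j + dj) % L
        s, t = lattice[i, j], lattice[ni, nj]
        if s == PREY and t == EMPTY and rng.random() < a:
            lattice[ni, nj] = PREY           # prey reproduction
        elif s == PRED and t == PREY and rng.random() < b:
            lattice[ni, nj] = PRED           # predation + predator reproduction
        elif s == PRED and rng.random() < c:
            lattice[i, j] = EMPTY            # spontaneous predator death

rng = np.random.default_rng(1)
L = 32
lattice = rng.integers(0, 3, size=(L, L)).astype(np.int8)
for _ in range(100):
    sweep(lattice, a=0.5, b=0.5, c=0.1, rng=rng)
# The densities below act as order parameters for the active (coexistence) phase.
print("prey density:", np.mean(lattice == PREY),
      " predator density:", np.mean(lattice == PRED))
```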
Abstract:
We have studied by numerical simulations the relaxation of the stochastic seven-state Potts model after a quench from a high temperature down to a temperature below the first-order transition. For quench temperatures just below the transition temperature the phase ordering occurs by simple coarsening under the action of surface tension. For sufficiently low temperatures, however, the straightening of the interface between domains drives the system toward a metastable disordered state, identified as a glassy state. Escaping from this state occurs, if the quench temperature is nonzero, by thermally activated dynamics that eventually drive the system toward the equilibrium state. (C) 2009 Elsevier B.V. All rights reserved.
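The relaxation described above can be reproduced schematically with single-spin-flip Metropolis dynamics: start from a random (high-temperature) configuration and evolve it at a temperature below the exactly known transition point T_c = 1/ln(1 + sqrt(q)) of the square-lattice Potts model. The lattice size, quench temperature and number of sweeps below are illustrative only.

```python
import numpy as np

def potts_quench(L=32, q=7, T=0.5, sweeps=50, seed=0):
    """Metropolis relaxation of the q-state Potts model after a quench.

    H = -J * sum over bonds of delta(s_i, s_j), with J = 1.  Returns the
    energy per site after each sweep; its slow decay signals coarsening or
    trapping in a metastable (glassy-like) disordered state."""
    rng = np.random.default_rng(seed)
    spins = rng.integers(q, size=(L, L))
    energies = []
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            new = rng.integers(q)
            neigh = (spins[(i + 1) % L, j], spins[(i - 1) % L, j],
                     spins[i, (j + 1) % L], spins[i, (j - 1) % L])
            # energy change of flipping site (i, j) to the proposed state
            dE = sum(n == spins[i, j] for n in neigh) - sum(n == new for n in neigh)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] = new
        e = -(np.mean(spins == np.roll(spins, 1, 0)) +
              np.mean(spins == np.roll(spins, 1, 1)))
        energies.append(e)
    return np.array(energies)

print("T_c =", 1.0 / np.log(1.0 + np.sqrt(7.0)))
print("energy per site over the last sweeps:", potts_quench()[-5:])
```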
Abstract:
We study by numerical simulations the time correlation function of a stochastic lattice model describing the dynamics of coexistence of two interacting biological species that present time cycles in the number of individuals of each species. Its asymptotic behavior is shown to decay in time as a sinusoidally modulated exponential, from which we extract the dominant eigenvalue of the evolution operator related to the stochastic dynamics, showing that it is complex, with the imaginary part being the frequency of the population cycles. The transition from the oscillatory to the nonoscillatory behavior occurs when the asymptotic behavior of the time correlation function becomes a pure exponential, that is, when the real part of the complex eigenvalue equals a real eigenvalue. We also show that the amplitude of the undamped oscillations increases with the square root of the area of the habitat, as do ordinary random fluctuations. (C) 2009 Elsevier B.V. All rights reserved.
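The extraction of the dominant eigenvalue from the asymptotic form of the time correlation function can be illustrated with a simple fit. The data below are synthetic (a damped cosine plus noise standing in for a measured C(t)); the numbers are illustrative and not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, amp, re_lam, im_lam, phase):
    """C(t) ~ A exp(Re(lambda) t) cos(Im(lambda) t + phi): the asymptotic form
    expected when the dominant eigenvalue lambda of the evolution operator is
    complex; Im(lambda) is then the frequency of the population cycles."""
    return amp * np.exp(re_lam * t) * np.cos(im_lam * t + phase)

t = np.linspace(0.0, 50.0, 500)
rng = np.random.default_rng(0)
c_data = damped_cosine(t, 1.0, -0.08, 0.9, 0.2) + 0.01 * rng.normal(size=t.size)

popt, _ = curve_fit(damped_cosine, t, c_data, p0=(1.0, -0.1, 1.0, 0.0))
print("fitted Re(lambda) = %.3f, Im(lambda) = %.3f" % (popt[1], popt[2]))
# A purely exponential decay (Im(lambda) -> 0) would signal the transition
# to the nonoscillatory regime described in the abstract.
```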
Abstract:
Radial transport in the tokamap, which has been proposed as a simple model for the motion in a stochastic plasma, is investigated. A theory for previous numerical findings is presented. The new results are stimulated by the fact that the radial diffusion coefficient is space-dependent. The space-dependence of the transport coefficient has several interesting effects which have not been elucidated so far. Among the new findings are the analytical predictions for the scaling of the mean radial displacement with time and the relation between the Fokker-Planck diffusion coefficient and the diffusion coefficient obtained from the mean square displacement. The applicability to other systems is also discussed. (c) 2009 WILEY-VCH GmbH & Co. KGaA, Weinheim
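The interplay between a space-dependent diffusion coefficient and the mean square displacement can be illustrated with a generic ensemble of random walkers; this is not a tokamap simulation, and the profile D(r) below is an arbitrary illustrative choice.

```python
import numpy as np

def radial_walk_msd(n_particles=5000, n_steps=2000, dt=1e-3, D0=0.1, seed=0):
    """Ensemble of 1D 'radial' random walks with a space-dependent diffusion
    coefficient D(r) = D0 * (1 + r)**2 (illustrative, not the tokamap profile).
    Returns times and the mean square displacement <(r(t) - r(0))**2>."""
    rng = np.random.default_rng(seed)
    r = np.full(n_particles, 0.5)
    r0 = r.copy()
    msd = np.empty(n_steps)
    for n in range(n_steps):
        D = D0 * (1.0 + r) ** 2
        r = np.abs(r + np.sqrt(2.0 * D * dt) * rng.normal(size=n_particles))  # reflect at r = 0
        msd[n] = np.mean((r - r0) ** 2)
    return dt * np.arange(1, n_steps + 1), msd

t, msd = radial_walk_msd()
# With space-dependent D the MSD need not grow linearly; the local slope gives
# an effective diffusion coefficient that can be compared with D(r).
slope = np.polyfit(t[t.size // 2:], msd[t.size // 2:], 1)[0]
print("late-time MSD slope / 2 =", slope / 2.0)
```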
Abstract:
A Nonlinear Programming algorithm that converges to second-order stationary points is introduced in this paper. The main tool is a second-order negative-curvature method for box-constrained minimization of a certain class of functions that do not possess continuous second derivatives. This method is used to define an Augmented Lagrangian algorithm of PHR (Powell-Hestenes-Rockafellar) type. Convergence proofs under weak constraint qualifications are given. Numerical examples showing that the new method converges to second-order stationary points in situations in which first-order methods fail are exhibited.
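A minimal sketch of the PHR (Powell-Hestenes-Rockafellar) augmented Lagrangian idea for an equality-constrained problem inside a box. The subproblem is solved here with a generic quasi-Newton bound-constrained solver rather than the second-order negative-curvature method of the paper, and the penalty update rule is a simplified placeholder.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, bounds, rho=10.0, outer_iters=20):
    """PHR-type augmented Lagrangian sketch for  min f(x)  s.t. h(x) = 0,  x in a box.

    Each outer iteration minimizes
        L(x) = f(x) + (rho / 2) * || h(x) + lambda / rho ||^2
    over the box and then applies the first-order multiplier update."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(h(x)))
    for _ in range(outer_iters):
        L = lambda z: f(z) + 0.5 * rho * np.sum((h(z) + lam / rho) ** 2)
        x = minimize(L, x, method="L-BFGS-B", bounds=bounds).x
        hx = h(x)
        lam = lam + rho * hx                  # multiplier update
        if np.linalg.norm(hx) > 1e-8:
            rho *= 2.0                        # simplistic penalty increase
    return x, lam

# Example: min x1^2 + x2^2  s.t.  x1 + x2 = 1,  -10 <= x <= 10  (solution (0.5, 0.5))
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: np.array([x[0] + x[1] - 1.0])
x_star, lam_star = augmented_lagrangian(f, h, [0.0, 0.0], [(-10, 10)] * 2)
print("x* =", x_star, " multiplier =", lam_star)
```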
Abstract:
Two Augmented Lagrangian algorithms for solving KKT systems are introduced. The algorithms differ in the way in which penalty parameters are updated. Possibly infeasible accumulation points are characterized. It is proved that feasible limit points that satisfy the Constant Positive Linear Dependence constraint qualification are KKT solutions. Boundedness of the penalty parameters is proved under suitable assumptions. Numerical experiments are presented.
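Since the two algorithms differ in how the penalty parameters are updated, the contrast can be sketched with two generic policies (one parameter per constraint versus a single global parameter); these are standard illustrative rules, not necessarily the exact update tests used in the paper.

```python
import numpy as np

def update_penalties(rho, viol, viol_prev, tau=0.5, gamma=10.0, per_constraint=True):
    """Two generic penalty-update policies for augmented Lagrangian methods.

    per_constraint=True : rho_i grows only if the i-th constraint violation
                          did not shrink by at least the factor tau.
    per_constraint=False: a single rho grows if the overall violation
                          (infinity norm) did not shrink by the factor tau."""
    rho, viol, viol_prev = (np.asarray(a, dtype=float) for a in (rho, viol, viol_prev))
    if per_constraint:
        grow = np.abs(viol) > tau * np.abs(viol_prev)
        return np.where(grow, gamma * rho, rho)
    if np.max(np.abs(viol)) > tau * np.max(np.abs(viol_prev)):
        return gamma * rho
    return rho

# First constraint barely improved, second improved a lot:
print(update_penalties([10.0, 10.0], viol=[0.5, 0.01], viol_prev=[0.6, 0.5]))
```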
Abstract:
Mathematical models, as instruments for understanding the workings of nature, are a traditional tool of physics, but they also play an ever increasing role in biology - in the description of fundamental processes as well as that of complex systems. In this review, the authors discuss two examples of the application of group theoretical methods, which constitute the mathematical discipline for a quantitative description of the idea of symmetry, to genetics. The first one appears, in the form of a pseudo-orthogonal (Lorentz like) symmetry, in the stochastic modelling of what may be regarded as the simplest possible example of a genetic network and, hopefully, a building block for more complicated ones: a single self-interacting or externally regulated gene with only two possible states: 'on' and 'off'. The second is the algebraic approach to the evolution of the genetic code, according to which the current code results from a dynamical symmetry breaking process, starting out from an initial state of complete symmetry and ending in the presently observed final state of low symmetry. In both cases, symmetry plays a decisive role: in the first, it is a characteristic feature of the dynamics of the gene switch and its decay to equilibrium, whereas in the second, it provides the guidelines for the evolution of the coding rules.
Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association studies (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights into modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to correctly adjust for the bias in genetic variance component estimation and to gain power in QTL mapping in terms of precision. Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data of an entire genome. These whole genome models were shown to have good performance in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance instead of mean, which validated the idea of variance-controlling genes. The work in the thesis is accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
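As an illustration of the generalized ridge idea behind marker-specific shrinkage, here is a small Python sketch (the listed software is in R; this is not the bigRR algorithm or its computational tricks, and the toy genotype data are fabricated for the example only).

```python
import numpy as np

def generalized_ridge(X, y, lambdas):
    """Generalized ridge estimate beta = (X'X + diag(lambdas))^{-1} X'y,
    i.e. ridge regression with one shrinkage parameter per predictor."""
    return np.linalg.solve(X.T @ X + np.diag(lambdas), X.T @ y)

# Toy data: 200 individuals, 1000 SNP-like markers, five true effects.
rng = np.random.default_rng(0)
n, p = 200, 1000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)      # genotypes coded 0/1/2
beta_true = np.zeros(p)
beta_true[:5] = 1.0
y = X @ beta_true + rng.normal(size=n)

beta_hat = generalized_ridge(X, y, lambdas=np.full(p, 50.0))
print("markers with largest |beta_hat|:", np.argsort(-np.abs(beta_hat))[:5])
```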
Abstract:
The regime of environmental flows (EF) must be included as a term of environmental demand in the management of water resources. Even though there are numerous methods for the computation of EF, the criteria applied at different steps in the calculation process are quite subjective, whereas the results are fixed values that must be met by water planners. This study presents a user-friendly tool for assessing the probability of compliance of a certain EF scenario with the natural regime in a semiarid area in southern Spain. 250 replications of a 25-yr period of different hydrological variables (rainfall, minimum and maximum flows, ...) were obtained at the study site from the combination of the Monte Carlo technique and local hydrological relationships. Several assumptions are made, such as the independence of annual rainfall from year to year and the variability of occurrence of the meteorological agents, with precipitation as the main source of uncertainty. Inputs to the tool are easily selected from a first menu and comprise measured rainfall data, EF values and the hydrological relationships for at least a 20-yr period. The outputs are the probabilities of compliance of the different components of the EF for the study period. From this, local optimization can be applied to establish EF components with a certain level of compliance in the study period. Different options for graphic output and analysis of results are included in terms of graphs and tables in several formats. This methodology turned out to be a useful tool for the implementation of an uncertainty analysis within the scope of environmental flows in water management and allowed the simulation of the impacts of several water resource development scenarios at the study site.
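The core of such a tool (Monte Carlo replications of hydrological years followed by a count of compliant years) can be sketched as follows. The rainfall model and the rainfall-flow relationship used here are purely hypothetical placeholders for the locally fitted relationships, and the EF threshold is an illustrative number.

```python
import numpy as np

def compliance_probability(ef_threshold, n_replications=250, n_years=25, seed=0):
    """Monte Carlo estimate of the probability that an environmental-flow (EF)
    requirement is met, in the spirit of the tool described above."""
    rng = np.random.default_rng(seed)
    compliant_fraction = np.empty(n_replications)
    for k in range(n_replications):
        # Annual rainfall assumed independent from year to year (as in the study).
        rainfall = rng.gamma(shape=4.0, scale=120.0, size=n_years)        # mm/yr
        # Hypothetical local relationship between rainfall and minimum flow.
        min_flow = 0.002 * rainfall + rng.normal(0.0, 0.05, size=n_years)
        compliant_fraction[k] = np.mean(min_flow >= ef_threshold)
    return compliant_fraction.mean(), compliant_fraction.std()

mean_p, spread = compliance_probability(ef_threshold=0.8)
print(f"estimated probability of compliance: {mean_p:.2f} "
      f"(spread across replications: {spread:.2f})")
```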
Abstract:
This paper considers the general problem of Feasible Generalized Least Squares Instrumental Variables (FGLS IV) estimation using optimal instruments. First we summarize the sufficient conditions for the FGLS IV estimator to be asymptotically equivalent to an optimal GLS IV estimator. Then we specialize to stationary dynamic systems with stationary VAR errors, and use the sufficient conditions to derive new moment conditions for these models. These moment conditions produce useful IVs from the lagged endogenous variables, despite the correlation between errors and endogenous variables. This use of the information contained in the lagged endogenous variables expands the class of IV estimators under consideration and thereby potentially improves both asymptotic and small-sample efficiency of the optimal IV estimator in the class. Some Monte Carlo experiments compare the new methods with those of Hatanaka [1976]. For the DGP used in the Monte Carlo experiments, asymptotic efficiency is strictly improved by the new IVs, and experimental small-sample efficiency is improved as well.
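For readers unfamiliar with IV estimation, the plain two-stage least squares (2SLS) estimator below is a much simpler relative of the FGLS IV estimator discussed in the abstract; the optimal-instrument construction and the VAR-error correction are not included, and the simulated system is a toy example.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS estimator  beta = (X' Pz X)^{-1} X' Pz y  with  Pz = Z (Z'Z)^{-1} Z'."""
    Pz_X = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)
    return np.linalg.solve(X.T @ Pz_X, Pz_X.T @ y)

# Toy system: x is endogenous (correlated with the error u), z is a valid instrument.
rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=(n, 1))
u = rng.normal(size=(n, 1))
x = 1.0 * z + 0.8 * u + rng.normal(size=(n, 1))
y = 2.0 * x + u
X = np.hstack([np.ones((n, 1)), x])
Z = np.hstack([np.ones((n, 1)), z])
print("2SLS estimate of (intercept, slope):", two_stage_least_squares(y, X, Z).ravel())
```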
Abstract:
We consider risk-averse convex stochastic programs expressed in terms of extended polyhedral risk measures. We derive computable confidence intervals on the optimal value of such stochastic programs using the Robust Stochastic Approximation and the Stochastic Mirror Descent (SMD) algorithms. When the objective functions are uniformly convex, we also propose a multistep extension of the Stochastic Mirror Descent algorithm and obtain confidence intervals on both the optimal values and optimal solutions. Numerical simulations show that our confidence intervals are much less conservative and are quicker to compute than previously obtained confidence intervals for SMD and that the multistep Stochastic Mirror Descent algorithm can obtain a good approximate solution much quicker than its nonmultistep counterpart. Our confidence intervals are also more reliable than asymptotic confidence intervals when the sample size is not much larger than the problem size.
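A stripped-down sketch of Stochastic Mirror Descent with the entropic mirror map on the probability simplex, together with a crude upper confidence bound on the optimal value obtained by evaluating the averaged iterate on fresh samples. The risk-averse, extended polyhedral risk measure setting and the multistep variant of the paper are not reproduced; the objective and all parameters are illustrative.

```python
import numpy as np

def stochastic_mirror_descent(sample_grad, x0, steps=2000, eta=0.05, seed=0):
    """Entropic SMD on the simplex; returns the averaged iterate."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    x_avg = np.zeros_like(x)
    for _ in range(steps):
        g = sample_grad(x, rng)              # stochastic (sub)gradient
        x = x * np.exp(-eta * g)             # multiplicative-weights update
        x /= x.sum()                         # back onto the simplex
        x_avg += x / steps
    return x_avg

# Toy problem: minimize E[<c(xi), x>] over the simplex, c(xi) = c_mean + noise.
c_mean = np.array([0.3, 0.1, 0.5, 0.4])
sample_grad = lambda x, rng: c_mean + 0.2 * rng.normal(size=c_mean.size)

x_bar = stochastic_mirror_descent(sample_grad, np.full(4, 0.25))
rng = np.random.default_rng(1)
vals = np.array([(c_mean + 0.2 * rng.normal(size=4)) @ x_bar for _ in range(10000)])
print("x_bar =", np.round(x_bar, 3),
      " 95% upper confidence bound on the optimal value:",
      vals.mean() + 1.96 * vals.std() / np.sqrt(vals.size))
```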
Abstract:
We discuss a general approach to building non-asymptotic confidence bounds for stochastic optimization problems. Our principal contribution is the observation that a Sample Average Approximation of a problem supplies upper and lower bounds for the optimal value of the problem which are essentially better than the quality of the corresponding optimal solutions. At the same time, such bounds are more reliable than “standard” confidence bounds obtained through the asymptotic approach. We also discuss bounding the optimal value of MinMax Stochastic Optimization and stochastically constrained problems. We conclude with a small simulation study illustrating the numerical behavior of the proposed bounds.
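The Sample Average Approximation bounding scheme (averaging the optimal values of several independent SAA problems for a lower bound, and evaluating one SAA solution on a large fresh sample for an upper bound) can be sketched on a toy newsvendor problem; the demand distribution and cost parameters below are illustrative, and the MinMax and stochastically constrained cases are not covered.

```python
import numpy as np

def newsvendor_cost(x, demand, h=1.0, b=3.0):
    """Sample-average cost of ordering x: holding cost h per unit over,
    shortage cost b per unit short."""
    return np.mean(h * np.maximum(x - demand, 0.0) + b * np.maximum(demand - x, 0.0))

def saa_bounds(m_batches=20, n_batch=200, n_eval=100_000, seed=0):
    """SAA lower/upper bound estimates for the toy newsvendor problem.

    Lower bound: mean of the optimal values of m independent SAA problems
    (its expectation is below the true optimal value of a minimization).
    Upper bound: the cost of one SAA solution evaluated on a fresh sample."""
    rng = np.random.default_rng(seed)
    sample = lambda n: rng.gamma(shape=5.0, scale=10.0, size=n)   # demand model
    q = 3.0 / (3.0 + 1.0)                    # critical ratio b / (b + h)
    lower_vals, x_hat = [], None
    for _ in range(m_batches):
        d = sample(n_batch)
        x_hat = np.quantile(d, q)            # exact minimizer of this SAA problem
        lower_vals.append(newsvendor_cost(x_hat, d))
    return np.mean(lower_vals), newsvendor_cost(x_hat, sample(n_eval))

print("SAA (lower, upper) bound estimates:", saa_bounds())
```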