92 results for "Stochastic simulation algorithm"
Abstract:
A methodology termed the “filtered density function” (FDF) is developed and implemented for large eddy simulation (LES) of chemically reacting turbulent flows. In this methodology, the effects of the unresolved scalar fluctuations are taken into account by considering the probability density function (PDF) of subgrid scale (SGS) scalar quantities. A transport equation is derived for the FDF in which the effect of chemical reactions appears in a closed form. The influences of scalar mixing and convection within the subgrid are modeled. The FDF transport equation is solved numerically via a Lagrangian Monte Carlo scheme in which the solutions of the equivalent stochastic differential equations (SDEs) are obtained. These solutions preserve the Itô-Gikhman nature of the SDEs. The consistency of the FDF approach, the convergence of its Monte Carlo solution and the performance of the closures employed in the FDF transport equation are assessed by comparisons with results obtained by direct numerical simulation (DNS) and by conventional LES procedures in which the first two SGS scalar moments are obtained by a finite difference method (LES-FD). These comparative assessments are conducted by implementations of all three schemes (FDF, DNS and LES-FD) in a temporally developing mixing layer and a spatially developing planar jet under both non-reacting and reacting conditions. In non-reacting flows, the Monte Carlo solution of the FDF yields results similar to those via LES-FD. The advantage of the FDF is demonstrated by its use in reacting flows. In the absence of a closure for the SGS scalar fluctuations, the LES-FD results are significantly different from those based on DNS. The FDF results show a much closer agreement with filtered DNS results. © 1998 American Institute of Physics.
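To make the Lagrangian Monte Carlo idea concrete, the following is a minimal sketch (not the authors' FDF solver) of an ensemble of notional particles evolved by Euler-Maruyama integration of an Itô SDE; the drift and diffusion functions are illustrative placeholders, not the FDF closures.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(a, b, x0, dt, n_steps):
    """Integrate dX = a(X) dt + b(X) dW for an ensemble of particles."""
    x = x0.copy()
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)  # Wiener increments
        # Explicit step: coefficients evaluated at the start of the
        # interval, consistent with the Ito interpretation of the SDE.
        x += a(x) * dt + b(x) * dw
    return x

# Illustrative drift (relaxation to zero) and constant diffusion.
particles = euler_maruyama(lambda x: -x,
                           lambda x: 0.5 * np.ones_like(x),
                           rng.normal(size=10_000), dt=1e-3, n_steps=1000)
print(particles.mean(), particles.var())  # ensemble (Monte Carlo) statistics
```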
Abstract:
We study the problem of optimal bandwidth allocation in communication networks. We consider a queueing model with two queues to which traffic from different competing flows arrives. The queue length at the buffers is observed every T instants of time, on the basis of which a decision on the amount of bandwidth to be allocated to each buffer for the next T instants is made. We consider a class of closed-loop feedback policies for the system and use a two-timescale simultaneous perturbation stochastic approximation (SPSA) algorithm to find an optimal policy within the prescribed class. We study the performance of the proposed algorithm in a numerical setting. Our algorithm is found to exhibit good performance.
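As a concrete illustration of the SPSA idea, the following sketch estimates the full gradient from just two noisy cost evaluations per iteration using a random ±1 perturbation. For simplicity it is plain one-timescale SPSA with a hypothetical quadratic cost standing in for the simulated queueing cost, not the paper's two-timescale scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(theta):
    # Stand-in for a noisy simulation-based cost with minimizer (2, 2).
    return np.sum((theta - 2.0) ** 2) + rng.normal(0.0, 0.1)

theta = np.zeros(2)
for k in range(1, 2001):
    a_k = 0.1 / k ** 0.602           # step-size sequence
    c_k = 0.1 / k ** 0.101           # perturbation-size sequence
    delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Bernoulli perturbation
    # Both gradient components from only two cost evaluations:
    g_hat = (cost(theta + c_k * delta) - cost(theta - c_k * delta)) / (2 * c_k * delta)
    theta -= a_k * g_hat
print(theta)  # converges towards the minimizer (2, 2)
```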
Abstract:
We develop a simulation-based algorithm for finite horizon Markov decision processes with finite state and finite action spaces. Illustrative numerical experiments with the proposed algorithm are shown for problems in flow control of communication networks and capacity switching in semiconductor fabrication.
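The abstract leaves the algorithm unspecified; as a generic illustration of simulation-based solution of a finite horizon MDP, the sketch below replaces exact transition probabilities with averages over sampled transitions in a backward induction. The two-state, two-action model is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
S, A, H, N = 2, 2, 5, 200   # states, actions, horizon, samples per (s, a)

def simulate(s, a):
    """Hypothetical simulator: returns (next state, one-stage cost)."""
    s_next = rng.choice(S, p=[0.7, 0.3] if a == 0 else [0.4, 0.6])
    return s_next, float(s == 1) + 0.5 * a

V = np.zeros((H + 1, S))            # terminal values V[H, :] = 0
for h in range(H - 1, -1, -1):      # backward in time
    Q = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            # Monte Carlo estimate of the stage-h Q-value.
            samples = [simulate(s, a) for _ in range(N)]
            Q[s, a] = np.mean([c + V[h + 1, s2] for s2, c in samples])
    V[h] = Q.min(axis=1)            # cost-minimizing (greedy) policy
print(V[0])                         # estimated optimal cost-to-go at stage 0
```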
Abstract:
Biochemical pathways involving chemical kinetics at medium concentrations of the reacting molecules (i.e., at the mesoscale) can be approximated as systems of chemical Langevin equations (CLE). We address the physically consistent non-negative simulation of CLE sample paths, as well as the issues of non-Lipschitz diffusion coefficients when a species approaches depletion and of stiffness due to faster reactions. We propose and analyse the non-negative Fully Implicit Stochastic alpha (FIS alpha) method, in which, to preserve non-negativity, reaction channels stopped by depleted reactants are deleted until the reactant concentration rises again, and a positive definite Jacobian is maintained to deal with possible stiffness. The method is illustrated with the computation of the active Protein Kinase C response in the Protein Kinase C pathway. (C) 2011 Elsevier Inc. All rights reserved.
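For a sense of what a CLE sample path looks like, here is an explicit Euler-Maruyama sketch for a single birth-death species (production at rate k1, degradation at rate k2*x) with a crude clipping guard at depletion. The rate constants are illustrative, and the clipping is only a stand-in for the paper's implicit FIS alpha treatment of stopped reaction channels.

```python
import numpy as np

rng = np.random.default_rng(3)
k1, k2, dt = 10.0, 0.1, 1e-3     # illustrative rates; steady mean = k1/k2 = 100
x, path = 100.0, []
for _ in range(50_000):
    a1, a2 = k1, k2 * max(x, 0.0)             # propensities, clipped at depletion
    # CLE step: dx = (a1 - a2) dt + sqrt(a1) dW1 - sqrt(a2) dW2
    x += (a1 - a2) * dt \
         + np.sqrt(a1 * dt) * rng.normal() \
         - np.sqrt(a2 * dt) * rng.normal()
    x = max(x, 0.0)                           # crude non-negativity guard
    path.append(x)
print(np.mean(path[25_000:]))                 # near the steady mean k1/k2 = 100
```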
Abstract:
This paper describes the simulation of a control scheme using the principle of field orientation for the control of a voltage source inverter-fed induction motor. The control principle is explained, followed by an algorithm to simulate the various components of the system on a digital computer. The dynamic response of the system to load disturbances and set-point variations has been studied. The results of the simulation, showing the behavior of the field coordinates for such disturbances, are also given.
Abstract:
In this article we review the current status of the modelling of both thermotropic and lyotropic liquid crystals. We discuss various coarse-graining schemes as well as simulation techniques such as Monte Carlo (MC) and molecular dynamics (MD) simulations. In the area of MC simulations, we discuss in detail the algorithm for simulating hard objects such as spherocylinders of various aspect ratios, where the excluded volume interaction enters the simulation through an overlap test. We use this technique to study the phase diagram of a special class of thermotropic liquid crystals, namely banana liquid crystals. Next we discuss a coarse-grained model of surfactant molecules and study the self-assembly of surfactant oligomers using MD simulations. Finally, we discuss an atomistically informed coarse-grained description of lipid molecules, used to study the gel to liquid crystalline phase transition in the lipid bilayer system.
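The overlap test mentioned for hard spherocylinders reduces to a segment-segment distance check: two spherocylinders of diameter D overlap iff the minimum distance between their axis segments is below D. A compact sketch using the usual clamped closest-point computation (adequate for illustration; production MC codes use a fully case-analysed version):

```python
import numpy as np

def segment_distance(p1, d1, p2, d2):
    """Minimum distance between segments p + t*d, t in [-0.5, 0.5]."""
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b                     # ~0 when axes are parallel
    s = 0.0 if denom < 1e-12 else np.clip((b * e - c * d) / denom, -0.5, 0.5)
    t = np.clip((b * s + e) / c, -0.5, 0.5)   # closest t for this s, clamped
    s = np.clip((b * t - d) / a, -0.5, 0.5)   # re-clamp s against clamped t
    return np.linalg.norm(p1 + s * d1 - (p2 + t * d2))

def overlap(r1, u1, r2, u2, L, D):
    """Spherocylinders: centers r, unit axes u, cylinder length L, diameter D."""
    return segment_distance(r1, L * u1, r2, L * u2) < D

print(overlap(np.zeros(3), np.array([0.0, 0.0, 1.0]),
              np.array([0.5, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
              L=5.0, D=1.0))   # True: the axes cross near the origin
```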
Abstract:
Image segmentation is formulated as a stochastic process whose invariant distribution is concentrated at points of the desired region. By choosing multiple seed points, different regions can be segmented. The algorithm is based on the theory of time-homogeneous Markov chains and has been largely motivated by the technique of simulated annealing. The method proposed here has been found to perform well on clean as well as noisy real-world images, while being computationally far less expensive than stochastic optimisation techniques.
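A minimal sketch of the simulated-annealing idea behind such a method: a Markov chain over binary pixel labels whose invariant distribution concentrates on low-energy labelings, with an illustrative energy combining data fidelity and Ising-style smoothness (this is the generic construction, not the paper's exact chain).

```python
import numpy as np

rng = np.random.default_rng(4)
truth = np.kron(np.eye(2, dtype=int), np.ones((8, 8), dtype=int))     # two blocks
img = np.clip(truth + 0.3 * rng.normal(size=truth.shape), 0.0, 1.0)   # noisy image
labels = rng.integers(0, 2, img.shape)

def energy_delta(y, x):
    """Energy change from flipping label (y, x): data term + smoothness term."""
    flipped = 1 - labels[y, x]
    data = (img[y, x] - flipped) ** 2 - (img[y, x] - labels[y, x]) ** 2
    nbrs = [labels[j, i] for j, i in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
            if 0 <= j < 16 and 0 <= i < 16]
    smooth = sum((flipped != n) - (labels[y, x] != n) for n in nbrs)
    return data + 0.5 * smooth

T = 1.0
for sweep in range(200):
    for y in range(16):
        for x in range(16):
            dE = energy_delta(y, x)
            if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis rule
                labels[y, x] = 1 - labels[y, x]
    T *= 0.97                                  # geometric cooling schedule
print((labels == truth).mean())                # fraction of pixels recovered
```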
Abstract:
We propose an iterative algorithm to simulate the dynamics generated by any n-qubit Hamiltonian. The simulation entails decomposing the unitary time evolution operator U into a product of different time-step unitaries. The algorithm product-decomposes U in a chosen operator basis by identifying a certain symmetry of U that is intimately related to the number of gates in the decomposition. We illustrate the algorithm by first obtaining a polynomial decomposition in the Pauli basis of the n-qubit quantum state transfer unitary of Di Franco et al. [Phys. Rev. Lett. 101, 230502 (2008)], which transports quantum information from one end of a spin chain to the other, and then implementing it in nuclear magnetic resonance to demonstrate that the decomposition is experimentally viable. We further experimentally test the resilience of the state transfer to static errors in the coupling parameters of the simulated Hamiltonian. This is done by decomposing and simulating the corresponding imperfect unitaries.
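The Pauli-basis expansion underlying such decompositions is easy to state: the 4^n Pauli strings are orthogonal under the trace inner product, so any n-qubit operator U has coefficients c_P = Tr(P U)/2^n. The sketch below shows this basis expansion only, not the paper's product decomposition into time-step unitaries.

```python
import numpy as np
from itertools import product

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])
paulis = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

def pauli_coeffs(U, n):
    """Coefficients of U in the n-qubit Pauli basis: c_P = Tr(P U) / 2**n."""
    coeffs = {}
    for labels in product('IXYZ', repeat=n):
        P = np.array([[1.0]])
        for l in labels:
            P = np.kron(P, paulis[l])          # build the Pauli string
        coeffs[''.join(labels)] = np.trace(P @ U) / 2 ** n  # P is Hermitian
    return coeffs

# Example: exp(-i*theta*X/2) = cos(theta/2) I - i sin(theta/2) X.
theta = 0.7
U = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X
print(pauli_coeffs(U, 1))   # only the 'I' and 'X' coefficients are nonzero
```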
Abstract:
Channel-aware assignment of subchannels to users in the downlink of an OFDMA system requires extensive feedback of channel state information (CSI) to the base station. Since bandwidth is scarce, schemes that limit feedback are necessary. We develop a novel, low-feedback, distributed splitting-based algorithm called SplitSelect to opportunistically assign each subchannel to its most suitable user. SplitSelect explicitly handles the multiple access control aspects associated with CSI feedback, and scales well with the number of users. In it, according to a scheduling criterion, each user locally maintains a scheduling metric for each subchannel. The goal is to select, for each subchannel, the user with the highest scheduling metric. At any time, each user contends for the subchannel for which it has the largest scheduling metric among the unallocated subchannels. A tractable asymptotic analysis of a system with many users is central to SplitSelect's simple design. Extensive simulation results demonstrate the speed with which subchannels and users are paired. The net data throughput, when the time overhead of selection is accounted for, is shown to be substantially better than that of several schemes proposed in the literature. We also show how fairness and user prioritization can be ensured by suitably defining the scheduling metric.
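The splitting mechanism can be illustrated with a toy max-selection loop: users contend in a slot only if their metric lies in a window (H_lo, H_hi], and ternary idle/success/collision feedback drives the window update until one user remains. This is the generic splitting scheme with i.i.d. uniform metrics, not the exact SplitSelect protocol.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 64
metrics = rng.random(n)                 # one scheduling metric per user
H_lo, H_hi = 1 - 1 / n, 1.0             # start where ~1 contender is expected
split_occurred, stack_lo, slots = False, 0.0, 0

while True:
    slots += 1
    c = np.count_nonzero((metrics > H_lo) & (metrics <= H_hi))
    if c == 1:
        break                           # success: the best user is isolated
    if c >= 2:                          # collision: keep the upper half
        stack_lo, H_lo = H_lo, (H_lo + H_hi) / 2
        split_occurred = True
    elif split_occurred:                # idle: winners are in the excluded half
        H_hi, H_lo, split_occurred = H_lo, stack_lo, False
    else:                               # idle, no collision yet: slide down
        H_hi, H_lo = H_lo, max(0.0, H_lo - 1 / n)

winner = int(np.flatnonzero((metrics > H_lo) & (metrics <= H_hi))[0])
print(slots, winner == int(np.argmax(metrics)))  # few slots, correct winner
```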
Abstract:
Unlike zero-sum stochastic games, a difficult problem in general-sum stochastic games is to obtain verifiable conditions for Nash equilibria. We show in this paper that by splitting an associated non-linear optimization problem into several sub-problems, characterization of Nash equilibria in general-sum discounted stochastic games is possible. Using the aforementioned sub-problems, we in fact derive a set of necessary and sufficient verifiable conditions (termed KKT-SP conditions) for a strategy-pair to result in a Nash equilibrium. Also, we show that any algorithm which tracks the zero of the gradient of the Lagrangian of every sub-problem provides a Nash strategy-pair. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Given the increasing cost of designing and building new highway pavements, reliability analysis has become vital to ensure that a given pavement performs as expected in the field. Recognizing the importance of failure analysis to safety, reliability, performance, and economy, back analysis has been employed in various engineering applications to evaluate the inherent uncertainties of design and analysis. The probabilistic back analysis method, formulated on Bayes' theorem and solved using Markov chain Monte Carlo simulation with a Metropolis-Hastings algorithm, has proved highly efficient in addressing this issue. It is also quite flexible and is applicable to any type of prior information. In this paper, this method is used to back-analyze the parameters that influence pavement life and to account for the uncertainty of the mechanistic-empirical pavement design model. The load-induced pavement structural responses (e.g., stresses, strains, and deflections) used to predict pavement life are estimated using a response surface methodology model developed from the results of linear elastic analysis. The failure criteria adopted for the analysis were based on the factor of safety (FOS), and the study was carried out for different sample sizes and jumping distributions to estimate the most robust posterior statistics. From the posterior statistics of the case considered, it was observed that after approximately 150 million standard axle load repetitions, the mean values of the pavement properties decrease as expected, with a significant decrease in the values of the elastic moduli of the affected layers. An analysis of the posterior statistics indicated that the parameters that contribute significantly to pavement failure were the moduli of the base and surface layers, which is consistent with the findings of other studies. After the back analysis, the mean value of the base modulus shows a significant decrease of 15.8% and that of the surface layer modulus a decrease of 3.12%. The usefulness of the back analysis methodology is further highlighted by estimating the design parameters for specified values of the factor of safety. The analysis revealed that for the pavement section considered, reliabilities of 89% and 94% can be achieved by adopting FOS values of 1.5 and 2, respectively. The methodology proposed can therefore be effectively used to identify the parameters that are critical to pavement failure in the design of pavements for specified levels of reliability. DOI: 10.1061/(ASCE)TE.1943-5436.0000455. (C) 2013 American Society of Civil Engineers.
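The Metropolis-Hastings step at the core of the back analysis is straightforward; the sketch below samples a single hypothetical layer modulus from a posterior combining a Gaussian prior with a likelihood on a stand-in response model. All numbers are illustrative, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_post(E):
    """Unnormalized log-posterior of a hypothetical layer modulus E (MPa)."""
    predicted = 1e4 / E                               # stand-in response model
    log_lik = -0.5 * ((predicted - 3.3) / 0.2) ** 2   # observed response ~ 3.3
    log_prior = -0.5 * ((E - 3500.0) / 500.0) ** 2    # prior: N(3500, 500^2)
    return log_lik + log_prior

E, chain = 3500.0, []
for _ in range(20_000):
    E_new = E + rng.normal(0.0, 50.0)   # symmetric "jumping distribution"
    if np.log(rng.random()) < log_post(E_new) - log_post(E):
        E = E_new                       # Metropolis acceptance
    chain.append(E)
posterior = np.array(chain[5_000:])     # discard burn-in
print(posterior.mean(), posterior.std())
```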
Abstract:
Low-complexity near-optimal detection of large-MIMO signals has attracted recent research attention. Recently, we proposed a local neighborhood search algorithm, namely the reactive tabu search (RTS) algorithm, as well as a factor-graph-based belief propagation (BP) algorithm for low-complexity large-MIMO detection. The motivation for the present work arises from the following two observations on these algorithms: i) although RTS achieves close to optimal performance for 4-QAM in large dimensions, significant performance improvement is still possible for higher-order QAM (e.g., 16- and 64-QAM); ii) BP also achieves near-optimal performance in large dimensions, but only for the {±1} alphabet. In this paper, we improve the large-MIMO detection performance for higher-order QAM signals by using a hybrid algorithm that employs both RTS and BP. In particular, motivated by the observation that when a detection error occurs at the RTS output, the least significant bits (LSBs) of the symbols are most often in error, we propose to first reconstruct and cancel the interference due to bits other than the LSBs at the RTS output and feed the interference-cancelled received signal to the BP algorithm to improve the reliability of the LSBs. The output of the BP is then fed back to RTS for the next iteration. Simulation results show that the proposed algorithm performs better than the RTS algorithm, as well as the semi-definite relaxation (SDR) and Gaussian tree approximation (GTA) algorithms.
Abstract:
The random eigenvalue problem arises in frequency and mode shape determination for a linear system with uncertainties in its structural properties. Among several methods of characterizing this random eigenvalue problem, one computationally fast method that gives good accuracy is a weak formulation using polynomial chaos expansion (PCE). In this method, the eigenvalues and eigenvectors are expanded in PCE, and the residual is minimized by a Galerkin projection. The goals of the current work are (i) to implement this PCE-characterized random eigenvalue problem in dynamic response calculations under random loading and (ii) to explore the computational advantages and challenges. In the proposed method, the response quantities are also expressed in PCE, followed by a Galerkin projection. A numerical comparison with a perturbation method and Monte Carlo simulation shows that when the loading has a random amplitude but deterministic frequency content, the proposed method gives more accurate results than a first-order perturbation method, and accuracy comparable to that of the Monte Carlo simulation at a lower computational cost. However, as the frequency content of the loading becomes random, or for general random process loadings, the method loses its accuracy and computational efficiency. Issues in implementation, limitations, and further challenges are also addressed.
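For reference, the brute-force alternative the PCE method is designed to beat looks like this: Monte Carlo sampling of the natural frequencies of a two-degree-of-freedom spring-mass chain with lognormal random stiffnesses (unit masses; all parameters are illustrative).

```python
import numpy as np

rng = np.random.default_rng(8)
freqs = []
for _ in range(10_000):
    k1, k2 = np.exp(rng.normal(np.log(100.0), 0.1, size=2))  # lognormal stiffness
    K = np.array([[k1 + k2, -k2],
                  [-k2, k2]])                    # stiffness matrix (unit masses)
    freqs.append(np.sqrt(np.linalg.eigvalsh(K)))  # natural frequencies (rad/s)
freqs = np.array(freqs)
print(freqs.mean(axis=0), freqs.std(axis=0))      # per-mode statistics
```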
Abstract:
Stochastic modelling is a useful way of simulating complex hard-rock aquifers, as hydrological properties (permeability, porosity, etc.) can be described using random variables with known statistics. However, very few studies have assessed the influence of topological uncertainty (i.e. the variability of the thickness of conductive zones in the aquifer), probably because it is not easy to retrieve accurate statistics of the aquifer geometry, especially in a hard-rock context. In this paper, we assessed the potential of using geophysical surveys to describe the geometry of a hard-rock aquifer in a stochastic modelling framework. The study site was a small experimental watershed in South India, where the aquifer consisted of a clayey to loamy-sandy zone (regolith) underlain by a conductive fissured rock layer (protolith), with the unweathered gneiss (bedrock) at the bottom. The spatial variability of the thickness of the regolith and fissured layers was estimated from electrical resistivity tomography (ERT) profiles performed along a few cross sections in the watershed. For stochastic analysis using Monte Carlo simulation, the generated random layer thicknesses were made conditional on the available data from the geophysics. In order to simulate steady state flow in the irregular domain with variable geometry, we used an isoparametric finite element method to discretize the flow equation over an unstructured grid with irregular hexahedral elements. The results indicated that the spatial variability of the layer thickness had a significant effect in reducing the simulated effective steady seepage flux, and that using the conditional simulations reduced the uncertainty of the simulated seepage flux. In conclusion, combining information on the aquifer geometry obtained from geophysical surveys with stochastic modelling is a promising methodology for improving the simulation of groundwater flow in complex hard-rock aquifers. (C) 2013 Elsevier B.V. All rights reserved.
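Conditioning the random layer geometry on point measurements can be sketched with a one-dimensional Gaussian model: thickness along a transect is given an exponential covariance, and realizations are drawn from the multivariate normal conditioned on values at the (hypothetical) ERT locations. The mean, sill, and correlation length below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
xs = np.linspace(0.0, 1000.0, 101)                 # transect grid (m)
obs_idx = np.array([10, 50, 90])                   # indices of ERT "measurements"
obs_val = np.array([12.0, 18.0, 9.0])              # measured thicknesses (m)
mean, sill, corr_len = 14.0, 9.0, 200.0            # illustrative field statistics

C = sill * np.exp(-np.abs(xs[:, None] - xs[None, :]) / corr_len)
ko = C[np.ix_(obs_idx, obs_idx)] + 1e-6 * np.eye(3)   # obs-obs covariance + nugget
kgo = C[:, obs_idx]                                   # grid-obs covariance
w = kgo @ np.linalg.inv(ko)                           # simple-kriging weights
cond_mean = mean + w @ (obs_val - mean)
cond_cov = C - w @ kgo.T
thickness = rng.multivariate_normal(cond_mean, cond_cov,
                                    check_valid='ignore')  # one realization
print(thickness[obs_idx])   # honors the data up to the small nugget
```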
Abstract:
Gene expression in living systems is inherently stochastic and tends to produce varying numbers of proteins over repeated cycles of transcription and translation. In this paper, an expression is derived for the steady-state protein number distribution starting from a two-stage kinetic model of the gene expression process involving p proteins and r mRNAs. The derivation is based on an exact path integral evaluation of the joint distribution, P(p, r, t), of p and r at time t, which can be expressed in terms of the coupled Langevin equations for p and r that represent the two-stage model in continuum form. The steady-state distribution of p alone, P(p), is obtained from P(p, r, t) (a bivariate Gaussian) by integrating out the r degrees of freedom and taking the limit t → ∞. P(p) is found to be proportional to the product of a Gaussian and a complementary error function. It provides a generally satisfactory fit to simulation data on the same two-stage process when the translational efficiency (a measure of intrinsic noise levels in the system) is relatively low; it is less successful as a model of the data when the translational efficiency (and hence the noise level) is high.
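The continuum two-stage model referred to here can be simulated directly; the Euler-Maruyama sketch below evolves mRNA number r (made at rate k_r, degraded at g_r*r) and protein number p (translated at k_p*r, degraded at g_p*p), with illustrative rate constants. The resulting steady-state histogram of p is what the derived Gaussian-times-erfc form is meant to describe.

```python
import numpy as np

rng = np.random.default_rng(7)
k_r, g_r, k_p, g_p, dt = 1.0, 0.1, 2.0, 0.05, 1e-2   # illustrative rates
r, p = k_r / g_r, k_r * k_p / (g_r * g_p)            # start at the means
samples = []
for step in range(500_000):
    ar, dr = k_r, g_r * max(r, 0.0)                  # mRNA birth/death rates
    ap, dp = k_p * max(r, 0.0), g_p * max(p, 0.0)    # protein birth/death rates
    r += (ar - dr) * dt + np.sqrt((ar + dr) * dt) * rng.normal()
    p += (ap - dp) * dt + np.sqrt((ap + dp) * dt) * rng.normal()
    if step % 100 == 0:
        samples.append(p)
print(np.mean(samples), np.std(samples))             # steady-state protein stats
```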