963 results for Monte-Carlo Simulation Method
Abstract:
Consider a network of unreliable links, modelling for example a communication network. Estimating the reliability of the network, expressed as the probability that certain nodes in the network are connected, is a computationally difficult task. In this paper we study how the Cross-Entropy method can be used to obtain more efficient network reliability estimation procedures. Three estimation techniques are considered: crude Monte Carlo and the more sophisticated Permutation Monte Carlo and Merge Process. We show that the Cross-Entropy method yields a speed-up over all three techniques.
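As a point of reference for the speed-ups reported above, the following is a minimal sketch of the crude Monte Carlo baseline for two-terminal reliability; the toy graph, terminal nodes, and edge failure probability are illustrative assumptions, not taken from the paper.

```python
import random

def connected(nodes, up_edges, s, t):
    """Depth-first search over the edges that are currently up."""
    adj = {v: [] for v in nodes}
    for u, v in up_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def crude_mc_unreliability(nodes, edges, q, s, t, n=100_000):
    """Estimate P(s and t are disconnected) when every edge fails
    independently with probability q."""
    fails = 0
    for _ in range(n):
        up = [e for e in edges if random.random() > q]
        if not connected(nodes, up, s, t):
            fails += 1
    return fails / n

# Toy "bridge" network; the unreliability is on the order of 1e-2,
# small enough that variance reduction starts to pay off.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
print(crude_mc_unreliability(nodes, edges, q=0.1, s=0, t=3))
```

For the rarer events the paper targets, this estimator's relative error blows up, which is what motivates the Cross-Entropy, Permutation Monte Carlo, and Merge Process refinements.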
Abstract:
2000 Mathematics Subject Classification: primary: 60J80, 60J85, secondary: 62M09, 92D40
Abstract:
Recent technological developments have made it possible to design various microdevices where fluid flow and heat transfer are involved. For the proper design of such systems, the governing physics needs to be investigated. Due to the difficulty of studying complex geometries at micro scales using experimental techniques, computational tools have been developed to analyze and simulate flow and heat transfer in microgeometries. However, conventional numerical methods using the Navier-Stokes equations fail to predict some aspects of microflows, such as nonlinear pressure distribution, increased mass flow rate, slip flow, and temperature jump at the solid boundaries. This necessitates the development of new computational methods, based on kinetic theory, that are both accurate and computationally efficient. In this study, the lattice Boltzmann method (LBM) was used to investigate flow and heat transfer in micro-sized geometries. The LBM is based on the Boltzmann equation, which is valid over the whole range of rarefaction observed in microflows. Results were obtained for isothermal channel flows at Knudsen numbers higher than 0.01 at different pressure ratios. LBM solutions for micro-Couette and micro-Poiseuille flow were found to be in good agreement, for pressure distribution and velocity field, with the analytical solutions valid in the slip flow regime (0.01 < Kn < 0.1) and with direct simulation Monte Carlo solutions valid in the transition regime (0.1 < Kn < 10). The isothermal LBM was further extended to simulate flows including heat transfer. The method was first validated for continuum channel flows with and without constrictions by comparing the thermal LBM results against accurate solutions obtained from analytical equations and the finite element method. Finally, the capability of the thermal LBM was improved by adding the effect of rarefaction, and the method was used to analyze the behavior of gas flow in microchannels. The major finding of this research is that the newly developed particle-based method described here can be used as an alternative numerical tool for studying non-continuum effects observed in micro-electro-mechanical systems (MEMS).
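To illustrate the kind of solver the abstract describes, here is a heavily simplified D2Q9 lattice Boltzmann (BGK) sketch for body-force-driven channel flow; the grid size, relaxation time, forcing, and no-slip bounce-back walls are assumptions for a continuum-regime demonstration, not the thesis's rarefied-flow boundary treatment.

```python
import numpy as np

nx, ny, tau, g = 32, 19, 0.8, 1e-5        # grid, relaxation time, body force
w = np.array([4/9] + [1/9]*4 + [1/36]*4)  # D2Q9 lattice weights
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])

f = w[:, None, None] * np.ones((9, ny, nx))  # start at rest, density 1
for step in range(5000):
    rho = f.sum(axis=0)
    ux = (cx[:, None, None] * f).sum(axis=0) / rho
    uy = (cy[:, None, None] * f).sum(axis=0) / rho
    # BGK collision toward the second-order equilibrium distribution,
    # plus a first-order forcing term driving flow in x
    cu = cx[:, None, None] * ux + cy[:, None, None] * uy
    feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))
    f += -(f - feq) / tau + w[:, None, None] * rho * 3 * g * cx[:, None, None]
    # streaming step (periodic in x via np.roll; walls handled below)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], cx[i], axis=1), cy[i], axis=0)
    # full bounce-back (no-slip) at the solid top and bottom rows
    for i, j in [(1, 3), (2, 4), (5, 7), (6, 8)]:
        for row in (0, ny - 1):
            f[i][row], f[j][row] = f[j][row].copy(), f[i][row].copy()

print(ux[1:-1].mean(axis=1))  # near-parabolic profile across the channel
```

In the rarefied regimes discussed above (Kn > 0.01), the bounce-back rule would be replaced by slip or diffuse-reflection boundary conditions, which is the extension the thesis develops.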
Abstract:
The protein folding problem has been one of the most challenging subjects in biological physics due to its complexity. Energy landscape theory based on statistical mechanics provides a thermodynamic interpretation of the protein folding process. We have been working to answer fundamental questions about protein-protein and protein-water interactions, which are very important for describing the energy landscape surface of proteins correctly. First, we present a new method for computing protein-protein interaction potentials of solvated proteins directly from SAXS data. An ensemble of proteins was modeled by Metropolis Monte Carlo and Molecular Dynamics simulations, and the global X-ray scattering of the whole model ensemble was computed at each snapshot of the simulation. The interaction potential model was optimized and iterated by a Levenberg-Marquardt algorithm. Secondly, we report that terahertz spectroscopy directly probes hydration dynamics around proteins and determines the size of the dynamical hydration shell. We also present the sequence- and pH-dependence of the hydration shell and the effect of hydrophobicity. In addition, kinetic terahertz absorption (KITA) spectroscopy is introduced to study the refolding kinetics of ubiquitin and its mutants. KITA results are compared to small-angle X-ray scattering, tryptophan fluorescence, and circular dichroism results. We propose that KITA monitors the rearrangement of hydrogen bonding during secondary structure formation. Finally, we present the development of the automated single molecule operating system (ASMOS) for a high-throughput single molecule detector, which levitates a single protein molecule in a 10 µm diameter droplet by laser guidance. Supporting calculations and simulations were performed throughout with custom program codes.
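As an illustration of the ensemble-generation step mentioned above, here is a minimal Metropolis Monte Carlo sketch for interacting particles; the screened-repulsion pair potential, particle count, and hard-wall box are assumptions, not the dissertation's solvated-protein model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, step, box = 50, 1.0, 0.3, 10.0  # particles, 1/kT, move size, box edge

def total_energy(x):
    """Sum of an assumed screened repulsion u(r) = exp(-r)/r over all pairs."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    r = d[np.triu_indices(n, k=1)]
    return float(np.sum(np.exp(-r) / r))

x = rng.uniform(0, box, size=(n, 3))
E = total_energy(x)
for sweep in range(5000):
    i = rng.integers(n)
    trial = x.copy()
    trial[i] += rng.normal(0, step, 3)        # random single-particle move
    if np.any(trial[i] < 0) or np.any(trial[i] > box):
        continue                              # hard-wall box: reject escapes
    dE = total_energy(trial) - E
    # Metropolis criterion: always accept downhill moves, accept uphill
    # moves with probability exp(-beta * dE)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        x, E = trial, E + dE

print(E)  # equilibrated energy; snapshots of x would feed a scattering model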
Abstract:
A primary goal of this dissertation is to understand the links between mathematical models that describe crystal surfaces at three fundamental length scales: the scale of individual atoms, the scale of collections of atoms forming crystal defects, and the macroscopic scale. Characterizing connections between different classes of models is a critical task for gaining insight into the physics they describe, a long-standing objective in applied analysis, and also highly relevant in engineering applications. The key concept I use in each problem addressed in this thesis is coarse graining, a strategy for connecting fine representations or models with coarser representations. Often this idea is invoked to reduce a large discrete system to an appropriate continuum description, e.g., individual particles are represented by a continuous density. While there is no general theory of coarse graining, one closely related mathematical approach is asymptotic analysis, i.e., the description of limiting behavior as some parameter becomes very large or very small. In the case of crystalline solids, it is natural to consider cases where the number of particles is large or where the lattice spacing is small. Limits such as these often make explicit the nature of links between models capturing different scales and, once established, provide a means of improving our understanding, or the models themselves. Finding appropriate variables whose limits illustrate the important connections between models is no easy task, however. This is one area where computer simulation is extremely helpful, as it allows us to see the results of complex dynamics and gather clues regarding the roles of different physical quantities. On the other hand, connections between models enable the development of novel multiscale computational schemes, so understanding can assist computation and vice versa. Some of these ideas are demonstrated in this thesis. The important outcomes of this thesis include: (1) a systematic derivation of the step-flow model of Burton, Cabrera, and Frank, with corrections, from atomistic solid-on-solid-type models in 1+1 dimensions; (2) the inclusion of an atomistically motivated transport mechanism in an island dynamics model, allowing for a more detailed account of mound evolution; and (3) the development of a hybrid discrete-continuum scheme for simulating the relaxation of a faceted crystal mound. Central to all of these modeling and simulation efforts is the presence of steps composed of individual layers of atoms on vicinal crystal surfaces. Consequently, a recurring theme in this research is the observation that mesoscale defects play a crucial role in crystal morphological evolution.
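To make the starting point of outcome (1) concrete, the following is a small sketch of a 1+1-dimensional solid-on-solid surface evolving under Metropolis hopping dynamics; the bond energy, lattice size, and flat initial condition are illustrative assumptions, not the dissertation's specific model.

```python
import math, random

random.seed(0)
L, beta_J = 100, 2.0        # columns, bond energy in units of kT
h = [0] * L                 # integer column heights, flat initial surface

def bond(heights, a):
    """Broken-bond energy of the bond between columns a and a+1 (periodic)."""
    return abs(heights[a % L] - heights[(a + 1) % L])

for step in range(100_000):
    i = random.randrange(L)
    j = (i + random.choice((-1, 1))) % L        # hop one atom from i to j
    touched = {(i - 1) % L, i, (j - 1) % L, j}  # bonds affected by the move
    before = sum(bond(h, a) for a in touched)
    h[i] -= 1; h[j] += 1
    dE = sum(bond(h, a) for a in touched) - before
    # Metropolis acceptance: hops that cost energy succeed with
    # probability exp(-beta_J * dE)
    if dE > 0 and random.random() >= math.exp(-beta_J * dE):
        h[i] += 1; h[j] -= 1                    # rejected: undo the hop

mean = sum(h) / L
print((sum((v - mean) ** 2 for v in h) / L) ** 0.5)  # surface roughness
```

Coarse graining asks what continuum equation the height profile h obeys as L grows and the lattice spacing shrinks; step-flow descriptions of the Burton-Cabrera-Frank type emerge as one such limit.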
Abstract:
Master's dissertation in Corporate Finance
Abstract:
Dissertation submitted to obtain the degree of Doctor in Chemical Engineering
Abstract:
Extreme value theory (EVT) deals with the occurrence of extreme phenomena. The tail index is a very important parameter appearing in the estimation of the probability of rare events. Under a semiparametric framework, inference requires the choice of a number k of upper order statistics to be considered. This is the crux of the matter, and there is no definitive formula for it: a small k leads to high variance, while large values of k tend to increase the bias. Several methodologies have emerged in the literature, especially concerning the most popular Hill estimator (Hill, 1975). In this work we compare through simulation the well-known procedures presented in Drees and Kaufmann (1998), Matthys and Beirlant (2000), Beirlant et al. (2002), and de Sousa and Michailidis (2004) with a heuristic scheme considered in Frahm et al. (2005), originally proposed for the estimation of a different tail measure but in a similar context. We will see that the new method may be an interesting alternative.
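For concreteness, here is a sketch of the Hill estimator and the bias-variance trade-off in k described above; the Pareto sample and the values of k are assumed for illustration.

```python
import numpy as np

def hill(sample, k):
    """Hill estimator of the extreme value index from the k largest
    order statistics: (1/k) * sum_i log(X_(n-i+1) / X_(n-k))."""
    x = np.sort(sample)
    return np.mean(np.log(x[-k:] / x[-k - 1]))

rng = np.random.default_rng(1)
alpha = 2.0                             # true tail index
x = rng.pareto(alpha, size=5000) + 1.0  # classical Pareto(alpha) sample

for k in (20, 100, 500, 2500):
    # small k: high variance; large k: growing bias
    print(k, 1.0 / hill(x, k))          # estimates of alpha
```

The procedures compared in the paper are, in effect, data-driven rules for picking k on this curve.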
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable (DLV) models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
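A minimal sketch of the core idea, under an assumed toy latent AR(1) model: conditional moments are estimated from one long simulation with a Nadaraya-Watson kernel smoother and then matched against the data. The model, bandwidth, and quadratic objective are illustrative choices, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, T):
    """Assumed toy DLV model: latent AR(1) z_t, observed only with noise."""
    z = np.zeros(T)
    for t in range(1, T):
        z[t] = theta * z[t - 1] + rng.normal()
    x = z[:-1] + rng.normal(size=T - 1)   # observable conditioning variable
    y = z[1:] + rng.normal(size=T - 1)    # observable whose moment we match
    return x, y

def kernel_conditional_moment(x_sim, y_sim, x_eval, h):
    """Nadaraya-Watson estimate of E[y | x] at the points x_eval."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_sim[None, :]) / h) ** 2)
    return (w * y_sim[None, :]).sum(axis=1) / w.sum(axis=1)

x_data, y_data = simulate(0.7, 500)       # "observed" data, true theta = 0.7

def objective(theta):
    """Squared moment condition: data y vs. simulated E_theta[y | x]."""
    x_sim, y_sim = simulate(theta, 20_000)  # one long simulation
    m = kernel_conditional_moment(x_sim, y_sim, x_data, h=0.5)
    return np.mean((y_data - m) ** 2)

for theta in (0.3, 0.5, 0.7, 0.9):
    print(theta, objective(theta))          # should dip near theta = 0.7
```

Note that the smoother conditions on x after the fact, so the model never has to be simulated subject to the conditioning information, which is the point made above.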
Abstract:
BACKGROUND: Lipid-lowering therapy is costly but effective at reducing coronary heart disease (CHD) risk. OBJECTIVE: To assess the cost-effectiveness and public health impact of Adult Treatment Panel III (ATP III) guidelines and compare with a range of risk- and age-based alternative strategies. DESIGN: The CHD Policy Model, a Markov-type cost-effectiveness model. DATA SOURCES: National surveys (1999 to 2004), vital statistics (2000), the Framingham Heart Study (1948 to 2000), other published data, and a direct survey of statin costs (2008). TARGET POPULATION: U.S. population age 35 to 85 years. TIME HORIZON: 2010 to 2040. PERSPECTIVE: Health care system. INTERVENTION: Lowering of low-density lipoprotein cholesterol with HMG-CoA reductase inhibitors (statins). OUTCOME MEASURE: Incremental cost-effectiveness. RESULTS OF BASE-CASE ANALYSIS: Full adherence to ATP III primary prevention guidelines would require starting (9.7 million) or intensifying (1.4 million) statin therapy for 11.1 million adults and would prevent 20,000 myocardial infarctions and 10,000 CHD deaths per year at an annual net cost of $3.6 billion ($42,000/QALY) if low-intensity statins cost $2.11 per pill. The ATP III guidelines would be preferred over alternative strategies if society is willing to pay $50,000/QALY and statins cost $1.54 to $2.21 per pill. At higher statin costs, ATP III is not cost-effective; at lower costs, more liberal statin-prescribing strategies would be preferred; and at costs less than $0.10 per pill, treating all persons with low-density lipoprotein cholesterol levels greater than 3.4 mmol/L (>130 mg/dL) would yield net cost savings. RESULTS OF SENSITIVITY ANALYSIS: Results are sensitive to the assumptions that LDL cholesterol becomes less important as a risk factor with increasing age and that little disutility results from taking a pill every day. LIMITATION: Randomized trial evidence for statin effectiveness is not available for all subgroups. CONCLUSION: The ATP III guidelines are relatively cost-effective and would have a large public health impact if implemented fully in the United States. Alternate strategies may be preferred, however, depending on the cost of statins and how much society is willing to pay for better health outcomes. FUNDING: Flight Attendants' Medical Research Institute and the Swanson Family Fund. The Framingham Heart Study and Framingham Offspring Study are conducted and supported by the National Heart, Lung, and Blood Institute.
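To show the arithmetic behind a Markov-type cost-effectiveness model like the one described, here is a toy three-state cohort sketch; the states, transition probabilities, costs, and utilities are assumed placeholders, not CHD Policy Model inputs (only the roughly $770/year pill cost echoes the $2.11-per-pill figure above).

```python
import numpy as np

states = ["well", "CHD", "dead"]
# assumed annual transition matrices without and with statin therapy
P_base = np.array([[0.96, 0.03, 0.01],
                   [0.00, 0.90, 0.10],
                   [0.00, 0.00, 1.00]])
P_statin = np.array([[0.97, 0.02, 0.01],
                     [0.00, 0.91, 0.09],
                     [0.00, 0.00, 1.00]])
cost = np.array([0.0, 8000.0, 0.0])                  # annual cost per state ($)
cost_statin = cost + np.array([770.0, 770.0, 0.0])   # ~$2.11/pill * 365 days
utility = np.array([1.0, 0.8, 0.0])                  # QALY weight per state

def run(P, c, years=30, disc=0.03):
    """Discounted cost and QALYs for a cohort starting in 'well'."""
    pop = np.array([1.0, 0.0, 0.0])
    total_cost = total_qaly = 0.0
    for t in range(years):
        d = (1 + disc) ** -t
        total_cost += d * (pop @ c)
        total_qaly += d * (pop @ utility)
        pop = pop @ P          # advance the cohort one Markov cycle
    return total_cost, total_qaly

c0, q0 = run(P_base, cost)
c1, q1 = run(P_statin, cost_statin)
print("ICER ($/QALY):", (c1 - c0) / (q1 - q0))
```

The $/QALY figures quoted in the abstract are incremental ratios of exactly this form, computed from a far richer state space and real epidemiological inputs.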
Abstract:
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models that permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state space representations to model the evolution of parameters. In this paper, we show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
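The state space representation mentioned above can be made concrete with a toy simulation in which a single autoregressive coefficient follows a random walk; the dimensions and variances are assumed values, and the cointegration machinery the paper develops is not represented here.

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma_y, sigma_b = 200, 0.5, 0.05  # sample size, measurement and state noise

beta = np.zeros(T)   # time-varying AR(1) coefficient (the latent state)
y = np.zeros(T)
beta[0] = 0.8
for t in range(1, T):
    beta[t] = beta[t - 1] + rng.normal(0, sigma_b)      # state equation
    y[t] = beta[t] * y[t - 1] + rng.normal(0, sigma_y)  # measurement equation

print(beta[::50])  # the drifting coefficient a filter would try to recover
```

In a TVP-VAR this scalar random walk becomes a vector of drifting coefficients, and posterior simulation (e.g., Kalman-filter-based MCMC) recovers the state path; the paper's contribution is an analogous random-walk law of motion for the cointegrating space itself.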
Abstract:
Spatio-temporal clusters in the 1997-2003 fire sequences of the Tuscany region (central Italy) have been identified and analysed using the scan statistic, a method originally devised to detect clusters in epidemiology. Results showed that the method reliably finds clusters of events and evaluates their significance via Monte Carlo replication. The evaluation of the presence of spatial and temporal patterns in fire occurrence, and of their significance, could have a great impact on forthcoming studies of fire occurrence prediction.
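As a sketch of the Monte Carlo replication step, here is a simplified one-dimensional scan statistic with a replication-based significance test; the uniform null, window width, and planted cluster are assumptions, and Kulldorff's likelihood-ratio scan statistic is reduced to a plain window count.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_window_count(points, width=0.1, n_windows=200):
    """Scan statistic: maximum event count in a sliding window over [0, 1]."""
    starts = np.linspace(0, 1 - width, n_windows)
    return max(((points >= s) & (points < s + width)).sum() for s in starts)

events = rng.uniform(0, 1, 100)
events[:15] = rng.uniform(0.4, 0.5, 15)   # plant a cluster of 15 events
observed = max_window_count(events)

# Monte Carlo replication: re-draw the events under the null of uniformity
# and rank the observed statistic among the replicated ones
reps = [max_window_count(rng.uniform(0, 1, len(events))) for _ in range(999)]
p_value = (1 + sum(r >= observed for r in reps)) / (1 + len(reps))
print(observed, p_value)
```

The replication rank gives an exact Monte Carlo p-value even though the null distribution of the maximum has no closed form, which is what makes the approach attractive for irregular fire-occurrence data.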