37 results for Monte Carlo Simulation


Relevance:

80.00%

Publisher:

Abstract:

A simulation scheme is proposed for determining the excess chemical potential of a substance in solution. First, a Monte Carlo simulation is performed with classical models for solute and solvent molecules. A representative sample of these configurations is then used in a hybrid quantum/classical (QM/MM) calculation, where the solute is treated quantum-mechanically, and the average electronic structure is used to construct an improved classical model. This procedure is iterated to self-consistency in the classical model, which in practice is attained in one or two steps, depending on the quality of the initial guess. The excess free energy of the molecule within the QM/MM approach is determined relative to the classical model using thermodynamic perturbation theory with a cumulant expansion. The procedure provides a method of constructing classical point charge models appropriate for the solution and gives a measure of the importance of solvent fluctuations.
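
A minimal sketch of the free-energy perturbation step described above, assuming the energy differences between the QM/MM and classical descriptions have already been collected over a sample of configurations; the Gaussian test data, function name, and kT value are illustrative only, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): free-energy perturbation with a
# second-order cumulant expansion, given energy differences dU = U_QM/MM - U_classical
# sampled over configurations drawn from the classical (reference) ensemble.
import numpy as np

def excess_free_energy(dU, kT):
    """Zwanzig perturbation estimate and its second-order cumulant approximation."""
    beta = 1.0 / kT
    # Exact (exponential-averaging) estimator: dA = -kT ln< exp(-beta dU) >_ref
    dA_exp = -kT * np.log(np.mean(np.exp(-beta * dU)))
    # Cumulant expansion to second order: dA ~ <dU> - (beta/2) Var(dU)
    dA_cum = np.mean(dU) - 0.5 * beta * np.var(dU)
    return dA_exp, dA_cum

# Hypothetical example: Gaussian-distributed energy differences (kJ/mol), kT at 300 K.
rng = np.random.default_rng(0)
dU = rng.normal(loc=-12.0, scale=3.0, size=5000)
print(excess_free_energy(dU, kT=2.479))
```

For Gaussian-distributed energy differences the two estimates coincide, which is one way to check how far the truncated cumulant expansion can be trusted for a given sample.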

Relevance:

80.00%

Publisher:

Abstract:

Static calculation and preliminary kinetic Monte Carlo simulation studies are undertaken for nucleation and growth in a model system that follows a Frank-van der Merwe mechanism. In the present case, we consider the deposition of Ag on Au(100) and Au(111) surfaces. The interactions were calculated using the embedded atom model. The kinetics of formation and growth of 2D Ag structures on Au(100) and Au(111) is investigated, and the influence of surface steps on this phenomenon is studied. Very different time scales are predicted for Ag diffusion on Au(100) and Au(111), thus yielding very different regimes for the nucleation and growth of the related 2D phases. These observations are drawn from the application of a model free of any adjustable parameter.
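
The following is a generic rejection-free kinetic Monte Carlo step (BKL-style event selection and stochastic time increment), included to illustrate the simulation class named in the abstract; the event list and hop rates are placeholders rather than the EAM-derived barriers used in the study.

```python
# Minimal rejection-free kinetic Monte Carlo step: pick one event with probability
# proportional to its rate, execute it, and advance time stochastically.
import math, random

def kmc_step(events):
    """events: list of (rate, callback). Returns the time increment dt = -ln(u)/R_total."""
    total = sum(rate for rate, _ in events)
    r = random.random() * total
    acc = 0.0
    for rate, do_event in events:
        acc += rate
        if r < acc:
            do_event()
            break
    return -math.log(1.0 - random.random()) / total

# Hypothetical use: two competing hop rates (s^-1) for an adatom on a terrace vs. along a step.
t, state = 0.0, {"hops": 0}
events = [(1.0e6, lambda: state.update(hops=state["hops"] + 1)),
          (2.5e5, lambda: None)]
for _ in range(1000):
    t += kmc_step(events)
print(state["hops"], t)
```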

Relevance:

80.00%

Publisher:

Abstract:

The characterization of thermocouple sensors for temperature measurement in variable flow environments is a challenging problem. In this paper, novel difference equation-based algorithms are presented that allow in situ characterization of temperature measurement probes consisting of two-thermocouple sensors with differing time constants. Linear and non-linear least squares formulations of the characterization problem are introduced and compared in terms of their computational complexity, robustness to noise and statistical properties. With the aid of this analysis, least squares optimization procedures that yield unbiased estimates are identified. The main contribution of the paper is the development of a linear two-parameter generalized total least squares formulation of the sensor characterization problem. Monte-Carlo simulation results are used to support the analysis.
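
As a hedged illustration of why a total-least-squares-type formulation matters when the regressors themselves are noisy, the sketch below runs a small Monte Carlo comparison of ordinary least squares against a classical SVD-based total least squares fit on synthetic data; it is a generic demonstration, not the paper's generalized TLS algorithm or its thermocouple model.

```python
# Monte Carlo comparison of OLS and classical TLS slope estimates when the
# regressor is measured with noise (errors-in-variables setting).
import numpy as np

def ols_slope(x, y):
    A = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(A, y, rcond=None)[0][0]

def tls_slope(x, y):
    # Classical TLS via the SVD of the centred data matrix [x - mean(x), y - mean(y)].
    M = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(M)
    v = Vt[-1]                      # right singular vector of the smallest singular value
    return -v[0] / v[1]

rng = np.random.default_rng(1)
true_slope, n_trials = 2.0, 2000
ols_est, tls_est = [], []
for _ in range(n_trials):
    x_true = rng.uniform(0, 1, 200)
    y = true_slope * x_true + 0.5 + rng.normal(0, 0.05, 200)   # noisy response
    x = x_true + rng.normal(0, 0.05, 200)                      # noisy regressor
    ols_est.append(ols_slope(x, y))
    tls_est.append(tls_slope(x, y))
print("OLS mean slope:", np.mean(ols_est), " TLS mean slope:", np.mean(tls_est))
```

In this setting the OLS slope is attenuated by the regressor noise, while the TLS estimate remains close to the true value, which mirrors the bias argument made in the abstract.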

Relevance:

80.00%

Publisher:

Abstract:

Monte Carlo simulation of chemical ordering kinetics in nano-layered L1₀ AB binary intermetallics was performed. The study addressed FePt thin layers, considered as a material for ultra-high-density magnetic storage media, and revealed metastability of the L1₀ c-variant superstructure with monoatomic planes parallel to the surface and off-plane easy magnetization. The layers, originally perfectly ordered in the c-variant of the L1₀ superstructure, showed homogeneous disordering running in parallel with a spontaneous re-orientation of the monoatomic planes, leading to a mosaic microstructure composed of a- and b-L1₀-variant domains. The domains nucleated heterogeneously on the surface of the layer and grew discontinuously inwards into its volume. Finally, the domains relaxed towards an equilibrium microstructure of the system. Two “atomistic-scale” processes, (i) homogeneous disordering and (ii) nucleation of the a- and b-L1₀-variant domains, showed characteristic time scales; the same was observed for the domain microstructure relaxation. The discontinuous domain growth showed no definite driving force and proceeded due to thermal fluctuations. This complex structural evolution has recently been observed experimentally in epitaxially deposited thin films of FePt.
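
A generic sketch of the lattice Monte Carlo machinery behind such ordering studies, using Metropolis-accepted swaps of neighbouring atoms on a simple cubic lattice with a single nearest-neighbour ordering energy; the lattice, energetics, and parameters are placeholders and do not reproduce the authors' L1₀ model.

```python
# Metropolis atom-swap Monte Carlo on a binary A/B lattice with periodic boundaries.
import numpy as np

rng = np.random.default_rng(2)
L, V_AB, kT = 16, -0.05, 0.04                  # lattice size, unlike-pair energy (eV), temperature (eV)
lattice = rng.integers(0, 2, size=(L, L, L))   # 0 = A, 1 = B

NEIGHBOURS = ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1))

def local_energy(lat, idx):
    x, y, z = idx
    e = 0.0
    for dx, dy, dz in NEIGHBOURS:
        if lat[x, y, z] != lat[(x+dx) % L, (y+dy) % L, (z+dz) % L]:
            e += V_AB
    return e

def metropolis_swap(lat):
    a = tuple(rng.integers(0, L, 3))
    d = NEIGHBOURS[rng.integers(0, 6)]                      # random nearest neighbour of a
    b = tuple((a[i] + d[i]) % L for i in range(3))
    if lat[a] == lat[b]:
        return                                              # swapping identical atoms does nothing
    e_old = local_energy(lat, a) + local_energy(lat, b)
    lat[a], lat[b] = lat[b], lat[a]
    e_new = local_energy(lat, a) + local_energy(lat, b)
    dE = e_new - e_old
    if dE > 0 and rng.random() >= np.exp(-dE / kT):
        lat[a], lat[b] = lat[b], lat[a]                     # reject: swap back

for _ in range(10000):
    metropolis_swap(lattice)
```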

Relevance:

80.00%

Publisher:

Abstract:

This paper arose from the work carried out for the Cullen/Uff Joint Inquiry into Train Protection Systems. It is concerned with the problem of evaluating the benefits of safety enhancements intended to avoid rare but catastrophic accidents, and with the role of Operations Research in the process. The difficulties lie both in the input values and in the representation of outcomes. A key input is the value of life. This paper briefly discusses why the value of life might vary from incident to incident and reviews alternative estimates before producing a 'best estimate' for rail. When the occurrence of an event is uncertain, the usual method is to apply a single 'expected' value. This paper argues that a more effective way of representing such situations is through Monte Carlo simulation and demonstrates the methodology on a case study of the decision as to whether or not automatic train protection (ATP) should have been installed on a route to the west of London. The paper suggests that the output is more informative than traditional cost-benefit appraisals or engineering event-tree approaches. It also shows that, unlike the result of the traditional approach, the value of ATP on this route would be positive over 50% of the time.
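
A minimal sketch of the Monte Carlo cost-benefit idea: sample the uncertain inputs, compute a net present value per draw, and report the fraction of draws in which the system pays off. All distributions and figures below are hypothetical placeholders, not the ATP case-study inputs.

```python
# Monte Carlo appraisal of a safety investment under input uncertainty.
import numpy as np

rng = np.random.default_rng(3)
N, years, discount = 100_000, 30, 0.035

accidents_per_year = rng.gamma(shape=2.0, scale=0.05, size=N)     # rare-event frequency
fatalities_per_accident = rng.lognormal(mean=1.0, sigma=0.8, size=N)
value_of_life = rng.normal(loc=3.0e6, scale=0.8e6, size=N)        # hypothetical monetary value
system_cost = rng.normal(loc=25.0e6, scale=5.0e6, size=N)

# Discounted benefit of preventing the expected fatalities over the appraisal period.
annuity = sum(1.0 / (1.0 + discount) ** t for t in range(1, years + 1))
benefit = accidents_per_year * fatalities_per_accident * value_of_life * annuity
npv = benefit - system_cost

print("P(NPV > 0) =", np.mean(npv > 0))
print("median NPV =", np.median(npv))
```

The full distribution of NPV, rather than a single expected value, is what supports statements such as "the value would be positive over 50% of the time".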

Relevance:

80.00%

Publisher:

Abstract:

The rejoining kinetics of double-stranded DNA fragments, along with measurements of residual damage after postirradiation incubation, are often used as indicators of the biological relevance of the damage induced by ionizing radiation of different qualities. Although it is widely accepted that high-LET radiation-induced double-strand breaks (DSBs) tend to rejoin with kinetics slower than low-LET radiation-induced DSBs, possibly due to the complexity of the DSB itself, the nature of a slowly rejoining DSB-containing DNA lesion remains unknown. Using an approach that combines pulsed-field gel electrophoresis (PFGE) of fragmented DNA from human skin fibroblasts and a recently developed Monte Carlo simulation of radiation-induced DNA breakage and rejoining kinetics, we have tested the role of DSB-containing DNA lesions in the 8 kbp to 5.7 Mbp fragment size range in determining the DSB rejoining kinetics. It is found that with low-LET X rays or high-LET alpha particles, DSB rejoining kinetics data obtained with PFGE can be computer-simulated assuming that DSB rejoining kinetics does not depend on the spacing of breaks along the chromosomes. After analysis of DNA fragmentation profiles, the rejoining kinetics of X-ray-induced DSBs could be fitted by two components: a fast component with a half-life of 0.9 +/- 0.5 h and a slow component with a half-life of 16 +/- 9 h. For alpha particles, a fast component with a half-life of 0.7 +/- 0.4 h and a slow component with a half-life of 12 +/- 5 h, along with a residual fraction of unrepaired breaks accounting for 8% of the initial damage, were observed. In summary, it is shown that the genomic proximity of breaks along a chromosome does not determine the rejoining kinetics, so the slowly rejoining breaks induced with higher frequencies after exposure to high-LET radiation (0.37 +/- 0.12) relative to low-LET radiation (0.22 +/- 0.07) can be explained on the basis of lesion complexity at the nanometer scale, known as locally multiply damaged sites. (c) 2005 by Radiation Research Society.
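
A worked sketch of the two-component rejoining model, evaluating the fraction of unrejoined breaks over time from the reported half-lives; the component fractions used below are one possible reading of the numbers quoted above and should be treated as illustrative.

```python
# Biexponential DSB rejoining: F(t) = residual + f_fast*2^(-t/t_fast) + f_slow*2^(-t/t_slow).
import numpy as np

def unrejoined_fraction(t_hours, f_fast, t_half_fast, t_half_slow, residual=0.0):
    f_slow = 1.0 - f_fast - residual
    return (residual
            + f_fast * 2.0 ** (-t_hours / t_half_fast)
            + f_slow * 2.0 ** (-t_hours / t_half_slow))

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])
# X rays: half-lives 0.9 h / 16 h, slow fraction ~0.22; alpha particles: 0.7 h / 12 h,
# slow fraction ~0.37 with an 8% residual (fractions read loosely from the abstract).
print(unrejoined_fraction(t, f_fast=0.78, t_half_fast=0.9, t_half_slow=16.0))
print(unrejoined_fraction(t, f_fast=0.55, t_half_fast=0.7, t_half_slow=12.0, residual=0.08))
```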

Relevance:

80.00%

Publisher:

Abstract:

Reliable prediction of long-term medical device performance using computer simulation requires consideration of variability in the surgical procedure, as well as patient-specific factors. However, even deterministic simulation of long-term failure processes for such devices is time- and resource-consuming, so including variability can lead to excessive times to achieve useful predictions. This study investigates the use of an accelerated probabilistic framework for predicting the likely performance envelope of a device and applies it to femoral prosthesis loosening in cemented hip arthroplasty.
A creep and fatigue damage failure model for bone cement, in conjunction with an interfacial fatigue model for the implant–cement interface, was used to simulate loosening of a prosthesis within a cement mantle. A deterministic set of trial simulations was used to account for variability of a set of surgical and patient factors, and a response surface method was used to perform and accelerate a Monte Carlo simulation to achieve an estimate of the likely range of prosthesis loosening. The proposed framework was used to conceptually investigate the influence of prosthesis selection and surgical placement on prosthesis migration.
Results demonstrate that the response surface method is capable of dramatically reducing the time to achieve convergence in the mean and variance of predicted response variables. A critical requirement for realistic predictions is the size and quality of the initial training dataset used to generate the response surface, and further work is required to determine recommendations for a minimum number of initial trials. Results of this conceptual application predicted that loosening was sensitive to the implant size and femoral width. Furthermore, different rankings of implant performance were predicted when only individual simulations (e.g. an average condition) were used to rank implants, compared with when stochastic simulations were used. In conclusion, the proposed framework provides a viable approach to predicting realistic ranges of loosening behaviour for orthopaedic implants in reduced timeframes compared with conventional Monte Carlo simulations.
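
A conceptual sketch of the accelerated framework described above, assuming a quadratic response surface fitted to a small deterministic trial set and then sampled cheaply by Monte Carlo; the "expensive model" is a stand-in function, not the creep/fatigue damage simulation used in the study.

```python
# Response-surface-accelerated Monte Carlo: fit a surrogate to few expensive runs,
# then sample the surrogate many times.
import numpy as np

rng = np.random.default_rng(4)

def expensive_model(x):
    # Placeholder for a long-running loosening simulation of two input factors.
    return 1.0 + 0.8 * x[0] - 0.5 * x[1] + 0.3 * x[0] * x[1] + 0.1 * x[1] ** 2

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# 1) Small deterministic training set (e.g. a design over implant size and femoral width).
X_train = rng.uniform(-1, 1, size=(20, 2))
y_train = np.array([expensive_model(x) for x in X_train])
coef, *_ = np.linalg.lstsq(quad_features(X_train), y_train, rcond=None)

# 2) Monte Carlo on the fitted surface: many samples at negligible cost.
X_mc = rng.normal(0.0, 0.4, size=(100_000, 2))
y_mc = quad_features(X_mc) @ coef
print("mean:", y_mc.mean(), "95% range:", np.percentile(y_mc, [2.5, 97.5]))
```

As the text notes, the quality of the surrogate, and therefore of the predicted envelope, hinges on the size and coverage of the initial training set.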

Relevance:

80.00%

Publisher:

Abstract:

An important issue in risk analysis is the distinction between epistemic and aleatory uncertainties. In this paper, the use of distinct representation formats for aleatory and epistemic uncertainties is advocated, the latter being modelled by sets of possible values. Modern uncertainty theories based on convex sets of probabilities are known to be instrumental for hybrid representations where the aleatory and epistemic components of uncertainty remain distinct. Simple uncertainty representation techniques based on fuzzy intervals and p-boxes are used in practice. This paper outlines a risk analysis methodology from the elicitation of knowledge about parameters to the decision. It proposes an elicitation methodology where the chosen representation format depends on the nature and the amount of available information. Uncertainty propagation methods then blend Monte Carlo simulation and interval analysis techniques. Nevertheless, the results provided by these techniques, often in terms of probability intervals, may be too complex for a decision-maker to interpret, and we therefore propose to compute a unique indicator of the likelihood of risk, called the confidence index. It explicitly accounts for the decision-maker's attitude in the face of ambiguity. This step takes place at the end of the risk analysis process, when no further collection of evidence is possible that might reduce the ambiguity due to epistemic uncertainty. This last feature stands in contrast with the Bayesian methodology, where epistemic uncertainties on input parameters are modelled by single subjective probabilities at the beginning of the risk analysis process.
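
A small sketch of hybrid propagation in this spirit: aleatory inputs are sampled by Monte Carlo, epistemic inputs are carried as intervals, and the output is a probability interval. The Hurwicz-style weighting used for the final index is an assumption for illustration, not necessarily the paper's definition of the confidence index, and the model and numbers are invented.

```python
# Hybrid Monte Carlo / interval propagation yielding a probability interval.
import numpy as np

rng = np.random.default_rng(5)
N = 20_000

flow = rng.lognormal(mean=0.0, sigma=0.3, size=N)   # aleatory input: sampled
k_lo, k_hi = 0.8, 1.4                               # epistemic input: interval only

def risk(flow, k):                 # monotone in k, so interval endpoints suffice
    return k * flow ** 2

lower = risk(flow, k_lo)           # per-sample lower bound on the output
upper = risk(flow, k_hi)           # per-sample upper bound on the output

threshold = 2.0
p_lower = np.mean(lower > threshold)      # lower bound on P(risk > threshold)
p_upper = np.mean(upper > threshold)      # upper bound on P(risk > threshold)
print(f"P(risk > {threshold}) lies in [{p_lower:.3f}, {p_upper:.3f}]")

# The decision-maker's attitude to ambiguity encoded as a weight w in [0, 1]
# (Hurwicz-style combination; an illustrative choice, not the paper's formula).
w = 0.5
print("single indicator:", (1 - w) * p_lower + w * p_upper)
```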

Relevance:

80.00%

Publisher:

Abstract:

Radiocarbon-dated sediment cores from six lakes in the Ahklun Mountains, south-western Alaska, were used to interpolate the ages of late Quaternary tephra beds ranging in age from 25.4 to 0.4 ka. The lakes are located downwind of the Aleutian Arc and Alaska Peninsula volcanoes in the northern Bristol Bay area, between 159° and 161°W at around 60°N. Sedimentation-rate age models for each lake were based on a published spline-fit procedure that uses Monte Carlo simulation to determine age-model uncertainty. In all, 62 ¹⁴C ages were used to construct the six age models, including 23 ages presented here for the first time. The age model from Lone Spruce Pond is based on 18 ages and is currently the best-resolved Holocene age model available from the region, with an average 2σ age uncertainty of about ±109 years over the past 14.5 ka. The sedimentary sequence from Lone Spruce Pond contains seven tephra beds, more than previously found in any other lake in the area. Of the 26 radiocarbon-dated tephra beds at the six lakes and from a soil pit, seven are correlated between two or more sites based on their ages. The major-element geochemistry of glass shards from most of these tephra beds supports the age-based correlations. The remaining tephra beds appear to be present at only one site, based on their unique geochemistry or age. The 5.8 ka tephra is similar to the widespread Aniakchak tephra [3.7 ± 0.2 (1σ) ka], but can be distinguished conclusively based on its trace-element geochemistry. The 3.1 and 0.4 ka tephras have glass major- and trace-element geochemical compositions indistinguishable from prominent Aniakchak tephra, and might represent redeposited beds. Only two tephra beds are found in all lakes: the Aniakchak tephra (3.7 ± 0.2 ka) and Tephra B (6.1 ± 0.3 ka). The tephra beds can be used as chronostratigraphic markers for other sedimentary sequences in the region, including cores from Cascade and Sunday lakes, which were previously undated and were analyzed in this study to correlate with the new regional tephrostratigraphy. © 2012 John Wiley & Sons, Ltd.
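
An illustrative Monte Carlo age-depth sketch: perturb the dated horizons within their uncertainties, rebuild a monotone age-depth relation for each draw, and read off the age distribution at an undated tephra depth. The depths, ages, and interpolation method below are invented for illustration and differ from the published spline-fit procedure cited above.

```python
# Monte Carlo age-depth interpolation with stratigraphic-order enforcement.
import numpy as np

rng = np.random.default_rng(6)

depths_cm = np.array([10.0, 55.0, 120.0, 200.0, 310.0])   # dated depths (hypothetical)
ages_ka = np.array([0.6, 2.1, 4.8, 8.9, 14.2])            # calibrated ages (ka)
age_sd = np.array([0.05, 0.08, 0.10, 0.15, 0.20])         # 1-sigma uncertainties (ka)

tephra_depth = 150.0
n_iter = 10_000
tephra_ages = np.empty(n_iter)

for i in range(n_iter):
    sample = rng.normal(ages_ka, age_sd)
    sample = np.maximum.accumulate(sample)        # enforce ages increasing with depth
    tephra_ages[i] = np.interp(tephra_depth, depths_cm, sample)

lo, mid, hi = np.percentile(tephra_ages, [2.5, 50.0, 97.5])
print(f"tephra age: {mid:.2f} ka (95% range {lo:.2f}-{hi:.2f} ka)")
```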

Relevance:

80.00%

Publisher:

Abstract:

A two-thermocouple sensor characterization method for use in variable flow applications is proposed. Previous offline methods for constant velocity flow are extended using sliding data windows and polynomials to accommodate variable velocity. Analysis of Monte-Carlo simulation studies confirms that the unbiased and consistent parameter estimator outperforms alternatives in the literature and has the added advantage of not requiring a priori knowledge of the time constant ratio of thermocouples. Experimental results from a test rig are also presented. © 2008 The Institute of Measurement and Control.

Relevance:

80.00%

Publisher:

Abstract:

We propose a low-complexity closed-loop spatial multiplexing method with limited feedback over multi-input-multi-output (MIMO) fading channels. The transmit adaptation is simply performed by selecting transmit antennas (or substreams) by comparing their signal-to-noise ratios to a given threshold with a fixed nonadaptive constellation and fixed transmit power per substream. We analyze the performance of the proposed system by deriving closed-form expressions for spectral efficiency, average transmit power, and bit error rate (BER). Depending on practical system design constraints, the threshold is chosen to maximize the spectral efficiency (or minimize the average BER) subject to average transmit power and average BER (or spectral efficiency) constraints, respectively. We present numerical and Monte Carlo simulation results that validate our analysis. Compared to open-loop spatial multiplexing and other approaches that select the best antenna subset in spatial multiplexing, the numerical results illustrate that the proposed technique obtains significant power gains for the same BER and spectral efficiency. We also provide numerical results that show improvement over rate-adaptive orthogonal space-time block coding, which requires highly complex constellation adaptation. We analyze the impact of feedback delay using analytical and Monte Carlo approaches. The proposed approach is arguably the simplest possible adaptive spatial multiplexing system from an implementation point of view. However, our approach and analysis can be extended to other systems using multiple constellations and power levels.
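
A simulation sketch of the threshold-based substream selection idea under i.i.d. Rayleigh fading, estimating spectral efficiency and a BPSK error rate by Monte Carlo; the system assumptions here (per-substream exponential SNR, fixed BPSK, independent substreams) are simplifications for illustration, not the analytical model of the paper.

```python
# Monte Carlo estimate of spectral efficiency and BER for SNR-threshold substream selection.
from math import erfc, sqrt
import numpy as np

rng = np.random.default_rng(7)
n_tx, snr_db, thresh_db, n_trials = 4, 10.0, 6.0, 50_000
snr_lin, thresh_lin = 10 ** (snr_db / 10), 10 ** (thresh_db / 10)

# Per-substream instantaneous SNR under Rayleigh fading: exponential with mean snr_lin.
gamma = rng.exponential(scale=snr_lin, size=(n_trials, n_tx))
active = gamma > thresh_lin                        # substreams allowed to transmit

bits_per_symbol = 1                                # fixed (non-adaptive) BPSK per substream
spectral_eff = bits_per_symbol * active.sum(axis=1).mean()

# BPSK bit error rate on the active substreams: BER = 0.5 * erfc(sqrt(gamma)).
ber_samples = [0.5 * erfc(sqrt(g)) for g in gamma[active]]
print("spectral efficiency (bit/s/Hz):", spectral_eff)
print("average BER on active substreams:", np.mean(ber_samples) if ber_samples else None)
```

Sweeping the threshold in such a simulation traces the trade-off between spectral efficiency and BER that the closed-form analysis in the abstract optimises.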

Relevance:

80.00%

Publisher:

Abstract:

This work investigates the end-to-end performance of randomized distributed space-time codes with complex Gaussian distribution, when employed in a wireless relay network. The relaying nodes are assumed to adopt a decode-and-forward strategy and transmissions are affected by small and large scale fading phenomena. Extremely tight, analytical approximations of the end-to-end symbol error probability and of the end-to-end outage probability are derived and successfully validated through Monte-Carlo simulation. For the high signal-to-noise ratio regime, a simple, closed-form expression for the symbol error probability is further provided.
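
A generic two-hop Monte Carlo outage estimate for a decode-and-forward link over Rayleigh fading with lognormal shadowing, to illustrate the kind of simulation used for validation; it does not implement the randomized space-time code itself, and all link parameters are assumptions.

```python
# End-to-end outage probability of a two-hop decode-and-forward link by Monte Carlo.
import numpy as np

rng = np.random.default_rng(8)
n_trials, rate_bps_hz = 200_000, 1.0
snr_db = {"source-relay": 12.0, "relay-destination": 10.0}
shadow_sigma_db = 4.0

def hop_capacity(mean_snr_db):
    shadow = rng.normal(0.0, shadow_sigma_db, n_trials)       # large-scale fading (dB)
    mean_lin = 10 ** ((mean_snr_db + shadow) / 10)
    snr = rng.exponential(1.0, n_trials) * mean_lin           # Rayleigh small-scale fading
    return 0.5 * np.log2(1.0 + snr)                           # 1/2: two time slots per message

# Decode-and-forward: the end-to-end rate is limited by the weaker hop.
c_end_to_end = np.minimum(hop_capacity(snr_db["source-relay"]),
                          hop_capacity(snr_db["relay-destination"]))
print("outage probability:", np.mean(c_end_to_end < rate_bps_hz))
```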

Relevance:

80.00%

Publisher:

Abstract:

Accretion disk winds are thought to produce many of the characteristic features seen in the spectra of active galactic nuclei (AGNs) and quasi-stellar objects (QSOs). These outflows also represent a natural form of feedback between the central supermassive black hole and its host galaxy. The mechanism for driving this mass loss remains unknown, although radiation pressure mediated by spectral lines is a leading candidate. Here, we calculate the ionization state of, and emergent spectra for, the hydrodynamic simulation of a line-driven disk wind previously presented by Proga & Kallman. To achieve this, we carry out a comprehensive Monte Carlo simulation of the radiative transfer through, and energy exchange within, the predicted outflow. We find that the wind is much more ionized than originally estimated. This is in part because it is much more difficult to shield any wind regions effectively when the outflow itself is allowed to reprocess and redirect ionizing photons. As a result, the calculated spectrum that would be observed from this particular outflow solution would not contain the ultraviolet spectral lines that are observed in many AGN/QSOs. Furthermore, the wind is so highly ionized that line driving would not actually be efficient. This does not necessarily mean that line-driven winds are not viable. However, our work does illustrate that in order to arrive at a self-consistent model of line-driven disk winds in AGN/QSO, it will be critical to include a more detailed treatment of radiative transfer and ionization in the next generation of hydrodynamic simulations.
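
A toy Monte Carlo radiative-transfer kernel, propagating photon packets through a uniform scattering slab, is sketched below to illustrate the basic technique named in the abstract; a full calculation of ionization state and energy exchange in a disk wind is far more involved, and the slab geometry and parameters are assumptions.

```python
# Monte Carlo photon-packet transport through a uniform slab with isotropic scattering.
import math, random

def escape_fraction(tau_slab, albedo, n_packets=100_000):
    escaped = 0
    for _ in range(n_packets):
        z, mu = 0.0, 1.0                                 # start at the slab base, moving upward
        while True:
            tau = -math.log(1.0 - random.random())       # sampled optical depth to next event
            z += mu * tau / tau_slab                     # position in units of slab thickness
            if z >= 1.0:
                escaped += 1                             # packet leaves the top of the slab
                break
            if z < 0.0 or random.random() > albedo:      # exits the base, or is absorbed
                break
            mu = 2.0 * random.random() - 1.0             # isotropic scattering direction
    return escaped / n_packets

print(escape_fraction(tau_slab=5.0, albedo=0.9))
```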

Relevance:

80.00%

Publisher:

Abstract:

Traditional experimental economics methods often consume substantial resources, including large numbers of qualified human participants, and the inconsistency of a participant’s decisions across repeated trials prevents sensitivity analyses. The problem can be solved if computer agents are capable of generating behaviour similar to that of the given participants in experiments. An analysis method based on experimental economics is presented to extract deep information from questionnaire data and emulate any number of participants. Taking customers’ willingness to purchase electric vehicles (EVs) as an example, multi-layer correlation information is extracted from a limited number of questionnaires. Agents mimicking the surveyed potential customers are modelled by matching the probabilistic distributions of their willingness embedded in the questionnaires. The validity of both the model and the algorithm is verified by comparing the agent-based Monte Carlo simulation results with the questionnaire-based deduction results. With the aid of the agent models, the effects of minority agents with specific preferences on the results are also discussed.
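
A sketch of the agent-emulation idea: estimate a discrete willingness distribution from questionnaire counts, instantiate any number of agents by sampling from it, and run Monte Carlo scenarios. The categories, counts, and subsidy effect below are hypothetical placeholders rather than the paper's data or multi-layer model.

```python
# Agent-based Monte Carlo driven by a questionnaire-derived willingness distribution.
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical questionnaire result: respondent counts per willingness-to-purchase category.
categories = np.array([0.1, 0.3, 0.5, 0.7, 0.9])     # baseline purchase probability
counts = np.array([40, 80, 60, 15, 5])
prob = counts / counts.sum()

def simulate_adoption(n_agents, subsidy_boost=0.0, n_runs=1000):
    """Monte Carlo estimate of the EV adoption rate among emulated agents."""
    rates = np.empty(n_runs)
    for i in range(n_runs):
        willingness = rng.choice(categories, size=n_agents, p=prob) + subsidy_boost
        rates[i] = np.mean(rng.random(n_agents) < willingness)
    return rates.mean(), rates.std()

print("baseline:", simulate_adoption(10_000))
print("with subsidy:", simulate_adoption(10_000, subsidy_boost=0.1))
```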