949 results for Monte Carlo Simulation


Relevance:

30.00%

Publisher:

Abstract:

This work analysed the feasibility of using a fast, customized Monte Carlo (MC) method to perform accurate computation of dose distributions during pre- and intraplanning of intraoperative electron radiation therapy (IOERT) procedures. The MC method that was implemented, which has been integrated into a specific innovative simulation and planning tool, is able to simulate the fate of thousands of particles per second, and the aim of this work was to determine the level of interactivity that could be achieved. The planning workflow enabled calibration of the imaging and treatment equipment, as well as manipulation of the surgical frame and insertion of protection shields around the organs at risk and other beam modifiers. In this way, the multidisciplinary team involved in IOERT has all the tools necessary to perform complex MC dose simulations adapted to their equipment in an efficient and transparent way. To assess the accuracy and reliability of this MC technique, dose distributions for a monoenergetic source were compared with those obtained using a general-purpose software package used widely in medical physics applications. Once the accuracy of the underlying simulator was confirmed, a clinical accelerator was modelled and experimental measurements in water were conducted. A comparison was made with the output of the simulator to identify the conditions under which accurate dose estimations could be obtained in less than 3 min, the threshold imposed to allow interactive use of the tool in treatment planning. Finally, a clinically relevant scenario, namely early-stage breast cancer treatment, was simulated with pre- and intraoperative volumes to verify that it was feasible to use the MC tool intraoperatively and to adjust dose delivery based on the simulation output, without compromising accuracy. The workflow provided a satisfactory model of the treatment head and the imaging system, enabling proper configuration of the treatment planning system and providing good accuracy in the dose simulation.
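
To make the scale of such computations concrete, here is a minimal sketch of Monte Carlo depth-dose scoring for a monoenergetic electron beam. Everything in it (the step sampling, a constant stopping power, purely local energy deposition) is a deliberately crude assumption for illustration, not the customized transport physics of the tool described above.

    import numpy as np

    rng = np.random.default_rng(0)

    def toy_electron_dose(n_particles=20_000, e0_mev=6.0, depth_cm=6.0, n_bins=60):
        """Score a 1D depth-dose curve for a toy monoenergetic electron beam.

        All physics here (step sampling, constant stopping power, straggling)
        is a placeholder, not a real condensed-history transport model.
        """
        edges = np.linspace(0.0, depth_cm, n_bins + 1)
        dose = np.zeros(n_bins)
        stopping_power = 2.0  # MeV/cm, rough constant value assumed for water
        for _ in range(n_particles):
            z, e = 0.0, e0_mev
            while e > 0.0 and z < depth_cm:
                step = rng.exponential(0.1)          # assumed mean step length, cm
                de = min(e, stopping_power * step * rng.uniform(0.7, 1.3))
                b = min(int(z / depth_cm * n_bins), n_bins - 1)
                dose[b] += de                        # deposit locally (toy scoring)
                z += step
                e -= de
        return 0.5 * (edges[:-1] + edges[1:]), dose / n_particles

    depths, d = toy_electron_dose()
    print("depth of dose maximum ~ %.2f cm" % depths[np.argmax(d)])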

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we present a theory tailored to the simulation of slow transport phenomena in atomistic systems. We first develop the theoretical framework for modelling equilibrium statistical ensembles, and then adapt it to build models of statistical ensembles away from equilibrium. The theory rests on the principles of statistical mechanics, in particular Jaynes' maximum entropy principle, valid for the treatment of systems both in and out of equilibrium, and on mean-field approximation theory. We express the problem mathematically as a variational principle in which a free entropy, rather than a free energy, is maximized. The proposed formulation defines atomistic equivalents of macroscopic variables such as the temperature and the molar fraction, which are not required to be uniform but can vary from particle to particle, so that non-uniform macroscopic fields can be considered. We complete the theoretical framework with Monte Carlo quadrature rules, which yield computable models. We then develop the full set of equations governing transport processes. From discrete thermodynamic forces and fluxes we derive an entropic dissipation inequality, which identifies the structure that the discrete kinetic potentials must satisfy. These kinetic potentials couple the rates of change of the microscopic variables to the corresponding driving forces, and must be completed with a phenomenological relation of the Onsager type. Finally, we provide numerical validations illustrating the ability of the theory to simulate equilibrium properties and surface segregation in metallic alloys. We first simulate thermodynamic equilibrium properties of the atomistic system with a simple mean-field model, and then assess the ability of the model to reproduce transport processes in complex systems over times that are long compared with the characteristic atomic time scales.
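
For reference, the equilibrium backbone of this construction is the standard statement of Jaynes' principle (notation ours, written for a discrete state space for illustration):

    \max_{p}\; S[p] = -k_B \sum_i p_i \ln p_i
    \quad \text{subject to} \quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E \rangle,

    \Rightarrow \quad p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},

with beta the Lagrange multiplier identified with the inverse temperature. The thesis builds on this by taking mean-field trial distributions, evaluating the resulting phase averages with Monte Carlo quadrature, and extending the variational structure to non-equilibrium ensembles.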

Relevance:

30.00%

Publisher:

Abstract:

We propose a general procedure for solving incomplete-data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
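
The core loop of such a procedure can be sketched on a toy problem of our own choosing, a right-censored normal model: alternate a Markov chain Monte Carlo draw of the missing data with a Robbins-Monro update of the parameter along the simulated complete-data score. The model, proposal scale and gain sequence below are illustrative assumptions, not those of the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy censored-normal problem: y_i ~ N(mu, 1); values above c are only
    # known to exceed c (our own example, not one of the paper's models).
    mu_true, c, n = 1.0, 1.5, 500
    y = rng.normal(mu_true, 1.0, n)
    obs, cens = y[y <= c], int(np.sum(y > c))

    mu = 0.0
    z = np.full(cens, c + 0.5)                    # current imputations of censored values
    for t in range(1, 3001):
        # MCMC step: Metropolis update of each missing value on (c, inf)
        prop = z + rng.normal(0, 0.5, cens)
        log_acc = -0.5 * ((prop - mu) ** 2 - (z - mu) ** 2)
        accept = (prop > c) & (np.log(rng.uniform(size=cens)) < log_acc)
        z[accept] = prop[accept]
        # Stochastic-approximation step: follow the complete-data score
        score = np.sum(obs - mu) + np.sum(z - mu)
        mu += (1.0 / t) * score / n               # decreasing gain drives convergence
    print("estimated mu ~ %.3f" % mu)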

Relevance:

30.00%

Publisher:

Abstract:

Dynamic importance weighting is proposed as a Monte Carlo method that has the capability to sample relevant parts of the configuration space even in the presence of many steep energy minima. The method relies on an additional dynamic variable (the importance weight) to help the system overcome steep barriers. A non-Metropolis theory is developed for the construction of such weighted samplers. Algorithms based on this method are designed for simulation and global optimization tasks arising from multimodal sampling, neural network training, and the traveling salesman problem. Numerical tests on these problems confirm the effectiveness of the method.
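
A minimal sketch of a dynamically weighted sampler, following the R-type move as commonly stated in the dynamic weighting literature; the target, proposal and theta below are our own toy choices, not the paper's test problems.

    import numpy as np

    rng = np.random.default_rng(2)

    def log_pi(x):
        # Rugged toy target: a landscape with many steep minima (assumed example)
        return -0.1 * x**2 + 3.0 * np.cos(3.0 * x)

    x, w, theta = 0.0, 1.0, 1.0
    samples, weights = [], []
    for _ in range(100_000):
        xp = x + rng.normal(0, 0.5)               # symmetric random-walk proposal
        r = w * np.exp(log_pi(xp) - log_pi(x))    # weighted Metropolis ratio
        a = r / (r + theta)                       # R-type acceptance probability
        if rng.uniform() < a:
            x, w = xp, r + theta                  # accepted: weight absorbs the ratio
        else:
            w = w * (r + theta) / theta           # rejected: weight inflates instead
        samples.append(x)
        weights.append(w)

    s, wts = np.array(samples), np.array(weights)
    print("weighted mean ~ %.3f" % (np.sum(wts * s) / np.sum(wts)))

Weighted averages, rather than plain averages, estimate expectations under the target; the weights produced by this scheme are heavy-tailed, and practical implementations control them with stratification or truncation, which the sketch omits.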

Relevance:

30.00%

Publisher:

Abstract:

We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latency in memory access, updating up to 1024 spins per cycle. We describe Janus in detail and summarize the physics results obtained in four years of operation of this machine. We discuss two types of physics applications: long simulations of very large systems (which mimic, and help build understanding of, the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). Our equilibrium simulations, on the other hand, are unprecedented both for the low temperatures reached and for the large systems brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
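
At its core, the update that Janus accelerates is a Metropolis sweep over +-1 spins with quenched random couplings. A scalar 2D Edwards-Anderson sketch follows; Janus itself bit-packs large 3D systems and updates up to 1024 spins per clock cycle, none of which this toy attempts.

    import numpy as np

    rng = np.random.default_rng(3)

    L, T = 16, 1.0                               # small 2D lattice; Janus ran large 3D systems
    beta = 1.0 / T
    s = rng.choice([-1, 1], size=(L, L))
    # Quenched +-1 couplings on horizontal and vertical bonds (Edwards-Anderson model)
    jx = rng.choice([-1, 1], size=(L, L))
    jy = rng.choice([-1, 1], size=(L, L))

    def sweep(s):
        for i in range(L):
            for j in range(L):
                # Local field from the four neighbours, periodic boundaries
                h = (jx[i, j] * s[i, (j + 1) % L] + jx[i, j - 1] * s[i, j - 1]
                     + jy[i, j] * s[(i + 1) % L, j] + jy[i - 1, j] * s[i - 1, j])
                dE = 2.0 * s[i, j] * h           # energy cost of flipping s[i, j]
                if dE <= 0 or rng.uniform() < np.exp(-beta * dE):
                    s[i, j] = -s[i, j]

    for _ in range(100):
        sweep(s)
    energy = -np.mean(s * (jx * np.roll(s, -1, axis=1) + jy * np.roll(s, -1, axis=0)))
    print("energy per spin after 100 sweeps: %.3f" % energy)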

Relevance:

30.00%

Publisher:

Abstract:

The cold climate anomaly about 8200 years ago is investigated with CLIMBER-2, a coupled atmosphere-ocean-biosphere model of intermediate complexity. This climate model simulates a cooling of about 3.6 K over the North Atlantic induced by a meltwater pulse from Lake Agassiz routed through the Hudson Strait. The meltwater pulse is assumed to have a volume of 1.6 x 10^14 m^3 and a discharge period of 2 years, on the basis of glaciological modeling of the decay of the Laurentide Ice Sheet (LIS). We present a possible mechanism that can explain the centennial duration of the 8.2 ka cold event. The mechanism is related to the existence of an additional equilibrium climate state with reduced North Atlantic Deep Water (NADW) formation and a southward shift of the NADW formation area. Hints of this additional climate state were obtained from the widely varying duration of the pulse-induced cold episode in response to overlaid random freshwater fluctuations in Monte Carlo simulations. The model equilibrium state was attained by releasing a weak multicentury freshwater flux through the St. Lawrence pathway, completed by the meltwater pulse. The existence of such a climate mode appears essential for reproducing climate anomalies in close agreement with paleoclimatic reconstructions of the 8.2 ka event. The results furthermore suggest that the temporal evolution of the cold event was partly a matter of chance.

Relevance:

30.00%

Publisher:

Abstract:

A new Monte Carlo algorithm is introduced for the simulation of supercooled liquids and glass formers, and tested in two model glasses. The algorithm thermalizes well below the Mode Coupling temperature and outperforms other optimized Monte Carlo methods.

Relevance:

30.00%

Publisher:

Abstract:

We present a novel method, called the transform likelihood ratio (TLR) method, for estimating rare-event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare-event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare-event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
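
The mechanics of the method can be sketched on a toy Pareto-sum problem of our own (all parameter values assumed): transform each heavy-tailed variable into a light-tailed one, tilt the light-tailed law exponentially, and tune the tilt with cross-entropy iterations.

    import numpy as np

    rng = np.random.default_rng(4)

    alpha, n, u = 1.5, 5, 1000.0                  # Pareto(alpha) tails; toy values
    # Transform: X = exp(Z/alpha) with Z ~ Exp(1)  =>  X is Pareto, Z is light-tailed.

    def s_from_z(z):
        return np.sum(np.exp(z / alpha), axis=1)  # S_n after the change of variables

    # Cross-entropy iterations: sample Z ~ Exp(lam) and learn lam from elite samples.
    lam, rho, batch = 1.0, 0.05, 20_000
    for _ in range(5):
        z = rng.exponential(1.0 / lam, size=(batch, n))
        s = s_from_z(z)
        gamma = min(np.quantile(s, 1 - rho), u)   # elite threshold, capped at u
        elite = s >= gamma
        # Likelihood ratio of Exp(1)^n versus Exp(lam)^n, per sample
        w = np.exp(-(1.0 - lam) * z.sum(axis=1)) / lam**n
        wz = w * elite
        lam = (wz.sum() * n) / (wz @ z.sum(axis=1))   # weighted MLE for the rate

    # Final importance-sampling estimate with the learned tilt
    z = rng.exponential(1.0 / lam, size=(200_000, n))
    s = s_from_z(z)
    w = np.exp(-(1.0 - lam) * z.sum(axis=1)) / lam**n
    print("P(S_n > u) ~ %.3e (lam = %.3f)" % (np.mean(w * (s > u)), lam))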

Relevance:

30.00%

Publisher:

Abstract:

Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserved the linkage disequilibrium deriving from recent generations of immigrants and reflected the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking these aspects into account was proposed and its efficiency evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F0 immigrants was improved by using large sample sizes (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (D_LR) appeared to be an effective way to predict whether F0 immigrants could be identified for a particular pair of populations using a given set of markers.
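
The thresholding logic can be illustrated schematically. The toy below draws genotypes under Hardy-Weinberg equilibrium at unlinked loci, and so ignores the linkage disequilibrium the study shows to be important; the allele frequencies and cut-off are invented.

    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(5)

    n_loci = 10
    pA = rng.uniform(0.1, 0.9, n_loci)            # allele frequencies, population A (toy)
    pB = np.clip(pA + rng.normal(0, 0.2, n_loci), 0.05, 0.95)  # diverged population B

    def sample_genotypes(p, n):
        return rng.binomial(2, p, size=(n, n_loci))   # allele counts per locus (HWE)

    def loglik(g, p):
        # Hardy-Weinberg genotype log-likelihood, summed over loci
        return binom.logpmf(g, 2, p).sum(axis=1)

    # Monte Carlo null distribution: individuals truly born in A
    null = sample_genotypes(pA, 10_000)
    stat_null = loglik(null, pA) - loglik(null, pB)
    crit = np.quantile(stat_null, 0.01)           # 1% threshold for 'home-born'

    # Power: how often are B-born individuals (F0 immigrants into A) flagged?
    imm = sample_genotypes(pB, 10_000)
    stat_imm = loglik(imm, pA) - loglik(imm, pB)
    print("critical value %.2f, power %.2f" % (crit, np.mean(stat_imm < crit)))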

Relevance:

30.00%

Publisher:

Abstract:

We present new simulation results for the packing of single-center and three-center models of carbon dioxide at high pressure in carbon slit pores. The former shows a series of packing transitions that are well described by our density functional theory model developed earlier. In contrast, these transitions are absent for the three-center model. Analysis of the simulation results shows that alternations of flat-lying molecules and rotated molecules can occur as the pore width is increased. The presence or absence of quadrupoles has negligible effect on these high-density structures.

Relevance:

30.00%

Publisher:

Abstract:

The structural and dynamic properties of dioctadecyldimethylammonium (DODDMA) ions intercalated into 2:1 layered clays are investigated using isothermal-isobaric (NPT) molecular dynamics (MD) simulation. The simulated results are in reasonably good agreement with available experimental measurements from X-ray diffraction (XRD), atomic force microscopy (AFM), Fourier transform infrared (FTIR) and nuclear magnetic resonance (NMR) spectroscopies. The nitrogen atoms are found to be located mainly within two layers close to the clay surface, whereas the methylene groups form a pseudo-quadrilayer structure. The tilt angle and order parameter results show that the interior two-bond segments of the alkyl chains prefer an arrangement parallel to the clay surface, whereas the segments toward the end groups adopt a random orientation. In addition, the alkyl chains within the layer structure lie almost parallel to the clay surface, whereas those outside the layer structure are essentially perpendicular to the surface. Trans conformations predominate in all cases, although extensive gauche conformations are observed, in agreement with previous simulations of n-butane. Moreover, an odd-even effect in the conformation distributions is observed, mainly along the chains close to the head and tail groups. The diffusion constants of both nitrogen atoms and methylene groups in these nanoconfined alkyl chains increase with temperature and with methylene position toward the tail groups.

Relevance:

30.00%

Publisher:

Abstract:

Aim: To develop an appropriate dosing strategy for continuous intravenous infusions (CII) of enoxaparin by minimizing the percentage of steady-state anti-Xa concentrations (C_ss) outside the therapeutic range of 0.5-1.2 IU/ml. Methods: A nonlinear mixed effects model was developed with NONMEM for 48 adult patients who received CII of enoxaparin with infusion durations that ranged from 8 to 894 h at rates between 100 and 1600 IU/h. Three hundred and sixty-three anti-Xa concentration measurements were available from patients who received CII. These were combined with 309 anti-Xa concentrations from 35 patients who received subcutaneous enoxaparin. The effects of age, body size, height, sex, creatinine clearance (CrCL) and patient location [intensive care unit (ICU) or general medical unit] on pharmacokinetic parameters were evaluated. Monte Carlo simulations were used to (i) evaluate covariate effects on C_ss and (ii) compare the impact of different infusion rates on predicted C_ss. The best dose was selected on the basis of the highest probability that the C_ss achieved would lie within the therapeutic range. Results: A two-compartment linear model, with additive and proportional residual error for general medical unit patients and only a proportional error for patients in the ICU, provided the best description of the data. CrCL and weight were found to significantly affect clearance and the volume of distribution of the central compartment, respectively. Simulations suggested that the best doses for patients in the ICU setting were 50 IU/kg per 12 h (4.2 IU/kg/h) if CrCL < 30 ml/min; 60 IU/kg per 12 h (5.0 IU/kg/h) if CrCL was 30-50 ml/min; and 70 IU/kg per 12 h (5.8 IU/kg/h) if CrCL > 50 ml/min. The best doses for patients in the general medical unit were 60 IU/kg per 12 h (5.0 IU/kg/h) if CrCL < 30 ml/min; 70 IU/kg per 12 h (5.8 IU/kg/h) if CrCL was 30-50 ml/min; and 100 IU/kg per 12 h (8.3 IU/kg/h) if CrCL > 50 ml/min. These doses were selected as providing the lowest equal probability of being either above or below the therapeutic range and the highest probability that the C_ss achieved would lie within it. Conclusion: The dose of enoxaparin should be individualized to the patient's renal function and weight. There is some evidence to support slightly lower doses of CII enoxaparin in patients in the ICU setting.
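
The structure of such dose-selection simulations can be sketched as follows; the clearance model, covariate exponent and variability below are invented placeholders, not the fitted NONMEM estimates reported above.

    import numpy as np

    rng = np.random.default_rng(6)

    def prob_in_range(dose_iu_per_kg_per_h, crcl_ml_min, n=50_000):
        """Monte Carlo P(0.5 <= C_ss <= 1.2 IU/ml) for a continuous infusion.

        At steady state C_ss = infusion rate / CL; CL depends on CrCL with
        log-normal between-subject variability. All numbers are illustrative.
        """
        wt = rng.normal(80, 15, n).clip(40, 150)               # body weight, kg
        rate = dose_iu_per_kg_per_h * wt                       # IU/h
        cl_typ = 700.0 * (crcl_ml_min / 70.0) ** 0.75          # ml/h, assumed model
        cl = cl_typ * np.exp(rng.normal(0, 0.3, n))            # log-normal BSV
        css = rate / cl                                        # IU/ml
        return np.mean((css >= 0.5) & (css <= 1.2))

    for rate in (4.2, 5.0, 5.8, 8.3):
        print("rate %.1f IU/kg/h, CrCL 40 ml/min: P(in range) = %.2f"
              % (rate, prob_in_range(rate, 40)))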

Relevance:

30.00%

Publisher:

Abstract:

The estimation of P(S_n > u) by simulation, where S_n is the sum of independent, identically distributed random variables Y_1, ..., Y_n, is of importance in many applications. We propose two simulation estimators based upon the identity P(S_n > u) = n P(S_n > u, M_n = Y_n), where M_n = max(Y_1, ..., Y_n). One estimator uses importance sampling (for Y_n only), and the other uses conditional Monte Carlo, conditioning upon Y_1, ..., Y_{n-1}. Properties of the relative error of the estimators are derived, and a numerical study is given in terms of the M/G/1 queue, in which n is replaced by an independent geometric random variable N. The conclusion is that the new estimators compare extremely favorably with previous ones. In particular, the conditional Monte Carlo estimator is the first heavy-tailed example of an estimator with bounded relative error. Further improvements are obtained in the random-N case by incorporating control variates and stratification techniques into the new estimation procedures.
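
The conditional Monte Carlo estimator follows directly from the stated identity: draw Y_1, ..., Y_{n-1}, then integrate Y_n out analytically, since the event {S_n > u, M_n = Y_n} requires Y_n > max(M_{n-1}, u - S_{n-1}). A sketch for Pareto summands (the distribution and parameter values are our choices, for illustration):

    import numpy as np

    rng = np.random.default_rng(7)

    alpha, n, u = 1.5, 10, 500.0                  # Pareto tails; toy values assumed

    def tail(x):
        # Survival function of a Pareto(alpha) variable supported on [1, inf)
        return np.where(x <= 1.0, 1.0, x ** -alpha)

    # Conditional Monte Carlo from P(S_n > u) = n P(S_n > u, M_n = Y_n):
    # draw Y_1..Y_{n-1}, then integrate Y_n out analytically.
    reps = 100_000
    y = rng.pareto(alpha, size=(reps, n - 1)) + 1.0
    s, m = y.sum(axis=1), y.max(axis=1)
    z = n * tail(np.maximum(m, u - s))
    est, re = z.mean(), z.std() / (z.mean() * np.sqrt(reps))
    print("P(S_n > u) ~ %.3e, estimated relative error %.4f" % (est, re))

    # Crude Monte Carlo for comparison (much noisier at this u)
    crude = np.mean((rng.pareto(alpha, size=(reps, n)) + 1.0).sum(axis=1) > u)
    print("crude estimate: %.3e" % crude)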

Relevance:

30.00%

Publisher:

Abstract:

Knowledge of the adsorption behavior of coal-bed gases, mainly under supercritical high-pressure conditions, is important for the optimum design of production processes to recover coal-bed methane and to sequester CO2 in coal beds. Here, as a preliminary study for the CO2 sequestration problem, we compare the two most rigorous adsorption methods based on the statistical mechanics approach, Density Functional Theory (DFT) and Grand Canonical Monte Carlo (GCMC) simulation, for single and binary mixtures of methane and carbon dioxide in slit-shaped pores ranging from around 0.75 to 7.5 nm in width, for pressures up to 300 bar and temperatures of 308-348 K. For single-component adsorption, the isotherms generated by DFT, especially for CO2, do not match the GCMC calculations well, and simulation is subsequently pursued here to investigate binary mixture adsorption. For binary adsorption, upon increase of pressure the selectivity of carbon dioxide relative to methane initially increases to a maximum value, and subsequently drops before attaining a constant value at pressures higher than 300 bar. While the selectivity increases with temperature in the initial pressure-sensitive region, the constant high-pressure value is temperature independent. Optimum selectivity at any temperature is attained at a pressure of 90-100 bar at low bulk mole fractions of CO2, decreasing to approximately 35 bar at high bulk mole fractions.
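
For reference, the elementary moves of a GCMC simulation can be sketched for a single-component Lennard-Jones fluid in a periodic box; the pore geometry, CO2/CH4 force fields and mixture handling of the study are deliberately omitted, and all values below are assumed.

    import numpy as np

    rng = np.random.default_rng(8)

    # Minimal bulk-fluid GCMC sketch in reduced LJ units.
    L_box, beta, activity = 7.0, 1 / 1.5, 0.05   # box edge, 1/T*, z = exp(beta*mu)/Lambda^3
    V = L_box ** 3
    pos = rng.uniform(0, L_box, size=(20, 3))    # initial configuration

    def particle_energy(r, others):
        """LJ interaction energy of one particle with all others (minimum image)."""
        if len(others) == 0:
            return 0.0
        d = others - r
        d -= L_box * np.round(d / L_box)         # minimum-image convention
        r2 = np.maximum(np.sum(d * d, axis=1), 0.64)  # clamp to avoid overflow (toy)
        inv6 = r2 ** -3
        return float(np.sum(4.0 * (inv6 * inv6 - inv6)))

    for step in range(20_000):
        n = len(pos)
        if rng.uniform() < 0.5:                  # attempt insertion
            r_new = rng.uniform(0, L_box, 3)
            du = particle_energy(r_new, pos)
            if rng.uniform() < min(1.0, activity * V / (n + 1) * np.exp(-beta * du)):
                pos = np.vstack([pos, r_new])
        elif n > 0:                              # attempt deletion
            k = rng.integers(n)
            du = particle_energy(pos[k], np.delete(pos, k, axis=0))
            if rng.uniform() < min(1.0, n / (activity * V) * np.exp(beta * du)):
                pos = np.delete(pos, k, axis=0)

    print("final density: %.3f (N = %d)" % (len(pos) / V, len(pos)))

Production codes add displacement moves and tail corrections and, for adsorption studies, an external solid-fluid potential for the pore walls; the acceptance rules above are the standard grand-canonical ones.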

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes the computer modelling of radio-frequency capacitively coupled methane/hydrogen plasmas and the consequences for the reactive ion etching of (100) GaAs surfaces. In addition, a range of etching experiments was undertaken over a matrix of pressure, power and methane concentration. The resulting surfaces were investigated using X-ray photoelectron spectroscopy, and the results were discussed in terms of physical and chemical models of particle/surface interactions, together with the predictions for the energies, angles and relative fluxes of the various plasma species arriving at the substrate. The model consisted of a Monte Carlo code which followed electrons and ions through the plasma and sheath potentials while taking account of collisions with background neutral gas molecules. The ionisation profile output from the electron module was used as input to the ionic module. Momentum-scattering interactions of ions with gas molecules were investigated via different models and compared against results given by a quantum mechanical code. The interactions were treated as central-potential scattering events and the resulting neutral cascades were followed. The predictions for ion energies at the cathode compared well with experimental ion energy distributions, verifying the particular form of the electrical potentials used and their applicability to the geometry of the plasma cell used in the etching experiments. The final code was used to investigate the effect of external plasma parameters on the mass distribution, energies and angles of all species impinging on the electrodes. Electron energies in the plasma also agreed favourably with measurements made using a Langmuir probe. The surface analysis showed the surfaces all to be depleted in arsenic, due to its preferential removal, and the resulting Ga:As ratio in the surface was found to be directly linked to the etch rate. The etch rate was determined by the methane flux, which was predicted by the code.
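
The Monte Carlo kernel of such codes alternates field-driven free flights with stochastic collisions. A one-dimensional sketch using the null-collision device follows; the field, collision frequency and charge-exchange probability are invented constants, not the thesis's inputs.

    import numpy as np

    rng = np.random.default_rng(9)

    # Toy 1D ion transport through a collisional sheath (all units arbitrary).
    q_over_m = 1.0e4            # charge-to-mass ratio (assumed)
    e_field = 1.0e3             # uniform sheath field (assumed)
    nu_max = 5.0e4              # constant trial collision frequency for null-collision
    sheath_len = 0.01

    def transit_energy():
        x, v = 0.0, 0.0
        a = q_over_m * e_field
        while x < sheath_len:
            dt = -np.log(rng.uniform()) / nu_max  # free-flight time to next trial event
            x += v * dt + 0.5 * a * dt * dt       # exact kinematics in a uniform field
            v += a * dt
            # Real collision with probability nu(v)/nu_max; a constant 0.3 assumed here
            if rng.uniform() < 0.3:
                v = 0.0                           # charge exchange: ion restarts at rest
        return 0.5 * v**2 / q_over_m              # kinetic energy per charge

    energies = np.array([transit_energy() for _ in range(5000)])
    print("mean arrival energy %.1f, spread %.1f" % (energies.mean(), energies.std()))

The spread of arrival energies produced by the charge-exchange resets is the toy analogue of the structured ion energy distributions measured at the cathode.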