970 results for Monte Carlo simulations
Abstract:
The purpose of this work was to study and quantify the differences in dose distributions computed with some of the newest dose calculation algorithms available in commercial planning systems. The study was done for clinical cases originally calculated with pencil beam convolution (PBC) where large density inhomogeneities were present. Three other dose algorithms were used: a pencil-beam-like algorithm, the anisotropic analytical algorithm (AAA); a convolution-superposition algorithm, collapsed cone convolution (CCC); and a Monte Carlo program, voxel Monte Carlo (VMC++). The dose calculation algorithms were compared under static field irradiations at 6 MV and 15 MV using multileaf collimators and hard wedges where necessary. Five clinical cases were studied: three lung and two breast cases. We found that, in terms of accuracy, the CCC algorithm performed better overall than AAA when compared to VMC++, but AAA remains an attractive option for routine use in the clinic due to its short computation times. Dose differences between the different algorithms and VMC++ for the median value of the planning target volume (PTV) were typically 0.4% (range: 0.0 to 1.4%) in the lung and -1.3% (range: -2.1 to -0.6%) in the breast for the few cases we analysed. As expected, PTV coverage and dose homogeneity turned out to be more critical in the lung than in the breast cases with respect to the accuracy of the dose calculation. This was observed in the dose volume histograms obtained from the Monte Carlo simulations.
Abstract:
Monte Carlo simulations arrive at their results by introducing randomness, sometimes derived from a physical randomizing device. Nonetheless, we argue, they open no new epistemic channel beyond the one already employed by traditional simulations: the inference by ordinary argumentation of conclusions from assumptions built into the simulations. We show that Monte Carlo simulations cannot produce knowledge other than by inference, and that they resemble other computer simulations in the manner in which they derive their conclusions. Simple examples of Monte Carlo simulations are analysed to identify the underlying inferences.
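The inference structure the abstract describes can be illustrated with a minimal sketch (a standard textbook example, not taken from the paper): estimating π by uniform sampling. The output carries knowledge only via an inference: *if* the points are uniform on the unit square, *then* by the law of large numbers the hit fraction converges to π/4.

```python
import random

def estimate_pi(n: int, seed: int = 0) -> float:
    """Estimate pi from n uniform points in the unit square.

    The conclusion is inferred from assumptions built into the
    simulation: uniformity of the sampler plus the law of large
    numbers, exactly the 'ordinary argumentation' the paper names.
    """
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n

print(estimate_pi(100_000))  # close to 3.14159 for large n
```

Replacing the pseudo-random generator with a physical randomizing device would change where the numbers come from, but not the inferential step that turns them into an estimate.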
Abstract:
Monte Carlo simulation is a powerful method in many natural and social sciences. But what sort of method is it? And where does its power come from? Are Monte Carlo simulations experiments, theories or something else? The aim of this talk is to answer these questions and to explain the power of Monte Carlo simulations. I provide a classification of Monte Carlo techniques and defend the claim that Monte Carlo simulation is a sort of inference.
Abstract:
Ion beam therapy is a valuable method for the treatment of deep-seated and radio-resistant tumors thanks to the favorable depth-dose distribution characterized by the Bragg peak. Hadrontherapy facilities take advantage of the specific ion range, resulting in a highly conformal dose in the target volume, while the dose in critical organs is reduced as compared to photon therapy. The necessity to monitor the delivery precision, i.e. the ion range, is unquestionable, and different approaches have therefore been investigated, such as the detection of prompt photons or annihilation photons from positron-emitting nuclei created during the therapeutic treatment. Based on the measurement of the induced β+ activity, our group has developed various in-beam PET prototypes: the one under test is composed of two planar detector heads, each consisting of four modules with a total active area of 10 × 10 cm². A single detector module is made of a LYSO crystal matrix coupled to a position-sensitive photomultiplier and is read out by dedicated front-end electronics. A preliminary data taking was performed at the Italian National Centre for Oncological Hadron Therapy (CNAO, Pavia), using proton beams in the energy range of 93–112 MeV impinging on a plastic phantom. The measured activity profiles are presented and compared with simulated ones based on the Monte Carlo FLUKA package.
Abstract:
We review the main results from extensive Monte Carlo (MC) simulations on athermal polymer packings in the bulk and under confinement. By employing the simplest possible model of excluded volume, macromolecules are represented as freely-jointed chains of hard spheres of uniform size. Simulations are carried out in a wide concentration range: from very dilute up to very high volume fractions, reaching the maximally random jammed (MRJ) state. We study how factors like chain length, volume fraction and flexibility of bond lengths affect the structure, shape and size of polymers, their packing efficiency and their phase behaviour (disorder–order transition). In addition, we observe how these properties are affected by confinement realized by flat, impenetrable walls in one dimension. Finally, by mapping the parent polymer chains to primitive paths through direct geometrical algorithms, we analyse the characteristics of the entanglement network as a function of packing density.
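The excluded-volume move at the heart of such athermal hard-sphere simulations can be sketched as follows (an illustrative toy, not the authors' production code): propose a random displacement of one sphere and reject it whenever it would create an overlap, using the minimum-image convention for periodic boundaries. All names and parameter values here are hypothetical.

```python
import random

def min_image(d: float, box: float) -> float:
    """Minimum-image convention for one coordinate difference."""
    return d - box * round(d / box)

def overlaps(pos, i, box, sigma):
    """True if sphere i overlaps any other sphere of diameter sigma."""
    xi, yi, zi = pos[i]
    for j, (xj, yj, zj) in enumerate(pos):
        if j == i:
            continue
        dx = min_image(xi - xj, box)
        dy = min_image(yi - yj, box)
        dz = min_image(zi - zj, box)
        if dx * dx + dy * dy + dz * dz < sigma * sigma:
            return True
    return False

def mc_sweep(pos, box, sigma, step, rng):
    """One Monte Carlo sweep: a trial displacement per sphere,
    rejected whenever it would violate excluded volume.  In an
    athermal system there is no energy, only this accept/reject."""
    accepted = 0
    for i in range(len(pos)):
        old = pos[i]
        pos[i] = tuple(c + rng.uniform(-step, step) for c in old)
        if overlaps(pos, i, box, sigma):
            pos[i] = old  # reject: spheres may never interpenetrate
        else:
            accepted += 1
    return accepted
```

A production code for chains would add bond-length constraints between consecutive spheres and neighbor lists for efficiency; the invariant, however, is the same: no configuration with overlapping spheres is ever accepted.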
Abstract:
A Monte Carlo simulation method for globular proteins, called extended-scaled-collective-variable (ESCV) Monte Carlo, is proposed. This method combines two Monte Carlo algorithms known as entropy-sampling and scaled-collective-variable algorithms. Entropy-sampling Monte Carlo is able to sample a large configurational space even in a disordered system that has a large number of potential barriers. In contrast, scaled-collective-variable Monte Carlo provides efficient sampling for a system whose dynamics is highly cooperative. Because a globular protein is a disordered system whose dynamics is characterized by collective motions, a combination of these two algorithms could provide an optimal Monte Carlo simulation for a globular protein. As a test case, we have carried out an ESCV Monte Carlo simulation for a cell-adhesive Arg-Gly-Asp-containing peptide, Lys-Arg-Cys-Arg-Gly-Asp-Cys-Met-Asp, and determined the conformational distribution at 300 K. The peptide contains a disulfide bridge between the two cysteine residues. This bond mimics the strong geometrical constraints that result from a protein's globular nature and gives rise to highly cooperative dynamics. Computational results show that the ESCV Monte Carlo simulation was not trapped at any local minimum and that the canonical distribution was correctly determined.
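Both ingredients of ESCV build on the same Metropolis accept/reject machinery; entropy sampling merely replaces the Boltzmann weight with an entropy-based weight so the walk can cross barriers. A minimal canonical Metropolis sampler on a toy one-dimensional double well (hypothetical, not the peptide system of the paper) shows the machinery being extended:

```python
import math
import random

def metropolis(V, x0, beta, step, n, seed=0):
    """Plain canonical Metropolis sampling of a potential V at inverse
    temperature beta.  Entropy-sampling MC keeps this accept/reject
    loop but swaps exp(-beta*V) for an entropy-derived weight, which
    is what lets it escape the local minima that trap this sampler."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        xp = x + rng.uniform(-step, step)
        # accept with probability min(1, exp(-beta * dV))
        if rng.random() < math.exp(-beta * (V(xp) - V(x))):
            x = xp
        samples.append(x)
    return samples

# Toy landscape: two minima at x = +/-1 separated by a barrier at x = 0.
double_well = lambda x: (x * x - 1.0) ** 2
```

At low temperature this sampler stays near whichever well it starts in; an entropy-sampling weight flattens the barrier and a scaled-collective-variable proposal replaces the isotropic step with moves along cooperative modes.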
Abstract:
We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latencies in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and summarize the physics results obtained in four years of operation of this machine; we discuss two types of physics applications: long simulations on very large systems (which try to mimic and provide understanding of the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). On the other hand, our equilibrium simulations are unprecedented both for the low temperatures reached and for the large systems that we have brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
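The reason binary variables let Janus avoid floating point can be seen in a hypothetical Python sketch of a Metropolis sweep for a one-dimensional ±J Edwards-Anderson chain (an illustrative toy, far simpler than Janus's 3-D lattices): the energy change on a spin flip is a small even integer, so the exponentials can be precomputed in a tiny lookup table and the inner loop reduces to integer logic.

```python
import math
import random

def ea_sweep(spins, J, beta, rng):
    """One Metropolis sweep of a periodic 1-D +/-J Edwards-Anderson
    chain with Hamiltonian H = -sum_i J[i] * s[i] * s[i+1].
    dE on flipping spin i is an even integer in {-4, 0, 4}, so the
    acceptance table has only a few entries -- the property that
    lets FPGA designs like Janus drop floating-point hardware."""
    n = len(spins)
    acc = {dE: math.exp(-beta * dE) for dE in range(-4, 5)}
    for i in range(n):
        left, right = spins[i - 1], spins[(i + 1) % n]
        h = J[i - 1] * left + J[i] * right   # integer local field
        dE = 2 * spins[i] * h                # energy change on flip
        if rng.random() < acc[dE]:
            spins[i] = -spins[i]
    return spins
```

On Janus the random couplings, spins, and local fields are all kept as bits and small integers, and many such updates run in parallel each clock cycle; only the final acceptance comparison touches a precomputed table.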
Abstract:
Mitarai [Phys. Fluids 17, 047101 (2005)] compared turbulent combustion models against homogeneous direct numerical simulations with extinction/reignition phenomena. The recently proposed multiple mapping conditioning (MMC) was not considered there and is simulated here for the same case, with favorable results. Implementation issues crucial for successful MMC simulations are also discussed.
Abstract:
Aim: To identify an appropriate dosage strategy for patients receiving enoxaparin by continuous intravenous infusion (CII). Methods: Monte Carlo simulations were performed in NONMEM (200 replicates of 1000 patients) to predict steady-state anti-Xa concentrations (Css) for patients receiving a CII of enoxaparin. The covariate distribution model was simulated based on covariate demographics in the CII study population. The impact of patient weight, renal function (creatinine clearance (CrCL)) and patient location (intensive care unit (ICU)) were evaluated. A population pharmacokinetic model was used as the input-output model (1-compartment first-order output model with mixed residual error structure). Success of a dosing regimen was based on the percent of Css between the therapeutic range of 0.5 IU/ml to 1.2 IU/ml. Results: The best dose for patients in the ICU was 4.2 IU/kg/h (success mean 64.8% and 90% prediction interval (PI): 60.1–69.8%) if CrCL < 60 ml/min; for CrCL ≥ 60 ml/min, the best dose was 8.3 IU/kg/h (success mean 65.4%, 90% PI: 58.5–73.2%). Simulations suggest that there was a 50% improvement in the success of the CII if the dose rate for ICU patients with CrCL
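The simulation logic described in this abstract (draw virtual patients from a covariate distribution, compute each patient's steady-state concentration, score the fraction inside the 0.5–1.2 IU/ml window) can be sketched as below. All numeric parameter values here are illustrative placeholders, not the paper's fitted population PK model, and the residual error model is omitted.

```python
import math
import random

def simulate_success(dose_iu_per_kg_h, n_patients=1000, seed=0):
    """Sketch of a Monte Carlo dosing evaluation for a continuous
    infusion: at steady state Css = infusion rate / clearance.
    We draw per-patient weight and anti-Xa clearance (log-normal
    between-subject variability; ILLUSTRATIVE values only) and
    return the fraction of Css within the therapeutic window
    0.5-1.2 IU/ml, the abstract's success criterion."""
    rng = random.Random(seed)
    in_window = 0
    for _ in range(n_patients):
        weight = max(rng.gauss(80.0, 15.0), 40.0)   # kg (hypothetical)
        cl = 0.6 * math.exp(rng.gauss(0.0, 0.3))    # L/h (hypothetical)
        rate = dose_iu_per_kg_h * weight            # IU/h
        css = rate / (cl * 1000.0)                  # IU/ml (CL in ml/h)
        if 0.5 <= css <= 1.2:
            in_window += 1
    return in_window / n_patients
```

The paper's full design repeats this over 200 replicates to obtain the 90% prediction interval on the success percentage, and conditions the covariate draws on ICU status and CrCL.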
Abstract:
The structure and dynamics of methane in hydrated potassium montmorillonite clay have been studied under conditions encountered in sedimentary basins and compared to those of hydrated sodium montmorillonite clay using computer simulation techniques. The simulated systems contain two molecular layers of water and followed gradients of 150 bar km⁻¹ and 30 K km⁻¹ up to a maximum burial depth of 6 km. The methane particle is coordinated to about 19 oxygen atoms, with 6 of these coming from the clay surface oxygens. Potassium ions tend to move away from the center towards the clay surface, in contrast to the behavior observed with the hydrated sodium form. The clay surface affinity for methane was found to be higher in the hydrated K-form. Methane diffusion in the two-layer hydrated K-montmorillonite increases from 0.39 × 10⁻⁹ m² s⁻¹ at 280 K to 3.27 × 10⁻⁹ m² s⁻¹ at 460 K, compared to 0.36 × 10⁻⁹ m² s⁻¹ at 280 K to 4.26 × 10⁻⁹ m² s⁻¹ at 460 K in the Na-montmorillonite hydrate. The distributions of the potassium ions were found to vary in the hydrates when compared to those of the sodium form. Water molecules were also found to be very mobile in the potassium clay hydrates compared to the sodium clay hydrates. © 2004 Elsevier Inc. All rights reserved.
Abstract:
Mathematics Subject Classification: 65C05, 60G50, 39A10, 92C37