943 results for Monte-Carlo simulation, Rod-coil block copolymer, Tetrapod polymer mixture
Abstract:
OBJECTIVE: To evaluate the absorbed dose in thyroid follicles due to low-energy electrons, such as Auger and internal-conversion electrons, in addition to beta particles, for the iodine radioisotopes (131I, 132I, 133I, 134I, and 135I) using the Monte Carlo method. MATERIALS AND METHODS: The dose calculation was performed at the follicular level, simulating Auger electrons, internal-conversion electrons, and beta particles with the MCNP4C code. The follicles (colloid and follicular cells) were modeled as spheres, with colloid diameters ranging from 30 to 500 µm. The density assumed for the follicles was that of water (1.0 g·cm⁻³). RESULTS: Considering low-energy particles, 131I contributes approximately 25% of the total dose absorbed by the colloid, while the short-half-life isotopes contribute 75%. For the follicular cells this share is even larger, reaching 87% for the short-half-life iodines versus 13% for 131I. CONCLUSION: The results demonstrate the importance of accounting for low-energy particles in the total absorbed dose at the follicular level (colloid and follicular cells) due to the iodine radioisotopes (131I, 132I, 133I, 134I, and 135I).
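As an illustrative sketch only (not the MCNP4C transport model used in the study), the geometric idea of dose self-absorption in a spherical colloid can be mimicked with a toy Monte Carlo that assumes straight-line electron tracks and a single fixed range:

```python
import random
import math

def absorbed_fraction(radius_um, electron_range_um, n=100_000, seed=1):
    """Estimate the fraction of electron path length deposited inside a
    sphere, assuming uniform decay sites, isotropic straight-line tracks,
    and one fixed range per particle (a toy model, not MCNP4C physics)."""
    rng = random.Random(seed)
    inside = 0.0
    for _ in range(n):
        # Uniform decay site inside the unit sphere via rejection sampling.
        while True:
            x, y, z = (rng.uniform(-1, 1) for _ in range(3))
            if x * x + y * y + z * z <= 1.0:
                break
        x, y, z = (c * radius_um for c in (x, y, z))
        # Isotropic emission direction.
        cos_t = rng.uniform(-1, 1)
        phi = rng.uniform(0, 2 * math.pi)
        sin_t = math.sqrt(1 - cos_t * cos_t)
        ux, uy, uz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
        # Distance from the decay site to the sphere surface along the track.
        b = x * ux + y * uy + z * uz
        c = x * x + y * y + z * z - radius_um ** 2
        t_exit = -b + math.sqrt(b * b - c)
        inside += min(t_exit, electron_range_um) / electron_range_um
    return inside / n
```

With a short (Auger-like) range most of the track stays inside the colloid, while a long (beta-like) range mostly escapes, which is the qualitative point of comparing low-energy electrons against beta particles at follicular scales.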
Abstract:
OBJECTIVE: To determine, by Monte Carlo simulation, the spectra of cobalt therapy beams at depth in water and correction factors for absorbed doses in lithium fluoride thermoluminescent dosimeters. MATERIALS AND METHODS: The secondary spectra of a clinical cobalt-60 source were simulated with the PENELOPE Monte Carlo code at several depths in water. Experimental depth-dose measurements were obtained with thermoluminescent dosimeters and an ionization chamber under radiotherapy reference conditions. The correction factors for the thermoluminescent dosimeters were obtained as the ratio between the absorption associated with the low-energy spectrum and that of the total spectrum. RESULTS: The spectral analysis at depth revealed low-energy secondary spectra responsible for a significant share of the dose deposition. Discrepancies of 3.2% were observed between the doses measured with the ionization chamber and with the thermoluminescent dosimeters. Applying the correction factors reduced the discrepancy between the absorbed doses to at most 0.3%. CONCLUSION: The simulated spectra allow the calculation of correction factors for thermoluminescent dosimeter readings used in depth-dose measurements, contributing to the reduction of uncertainties in the quality control of clinical radiotherapy beams.
Abstract:
Objective: To derive the filtered tungsten X-ray spectra used in digital mammography systems by means of Monte Carlo simulations. Materials and Methods: Filtered spectra for a rhodium filter were obtained for tube potentials between 26 and 32 kV. The half-value layers (HVLs) of the simulated filtered spectra were compared with those obtained experimentally with a solid-state detector (Unfors model 8202031-H Xi R/F & MAM Detector Platinum with an 8201023-C Xi Base unit Platinum Plus w mAs) on a Hologic Selenia Dimensions system in direct radiography mode. Results: The calculated HVL values showed good agreement with those obtained experimentally. The greatest relative difference between the Monte Carlo calculated HVL values and the experimental HVL values was 4%. Conclusion: The results show that the filtered tungsten anode X-ray spectra and the EGSnrc Monte Carlo code can be used for mean glandular dose determination in mammography.
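The HVL comparison above can be illustrated with a hedged toy calculation: given an assumed discrete spectrum and assumed aluminium attenuation coefficients (both illustrative placeholders, not the simulated W/Rh data), the HVL follows from bisection on the transmission curve:

```python
import math

# Toy mammography spectrum: photon energies (keV) and relative fluences.
# These numbers are illustrative placeholders, not a simulated W/Rh spectrum.
spectrum = [(15, 0.2), (20, 0.5), (25, 0.8), (30, 0.4)]

# Assumed aluminium linear attenuation coefficients (1/mm) at those
# energies -- rough order-of-magnitude values for illustration only.
mu_al = {15: 2.0, 20: 0.9, 25: 0.5, 30: 0.3}

def transmitted(t_mm):
    """Fluence-weighted transmission through t_mm of aluminium.
    (Real HVL is defined on air kerma; fluence keeps the toy simple.)"""
    total = sum(w for _, w in spectrum)
    return sum(w * math.exp(-mu_al[e] * t_mm) for e, w in spectrum) / total

def hvl(tol=1e-6):
    """Find the Al thickness that halves the transmission, by bisection."""
    lo, hi = 0.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if transmitted(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because lower-energy components attenuate faster, the transmitted spectrum hardens with depth, which is why the HVL of a filtered beam depends on the whole spectral shape rather than a single energy.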
Abstract:
This article reports the phase-behavior determination of a system forming reverse liquid crystals and the formation of novel disperse systems in the two-phase region. The studied system is formed by water, cyclohexane, and Pluronic L-121, an amphiphilic block copolymer of special interest due to its aggregation and structural properties. This system forms reverse cubic (I2) and reverse hexagonal (H2) phases at high polymer concentrations. These reverse phases are of particular interest since, in the two-phase region, stable high-internal-phase reverse emulsions can be formed. The I2 and H2 phases and the derived gel emulsions were characterized with small-angle X-ray scattering (SAXS) and rheometry, and the influence of temperature and water content was studied. The H2 phase underwent a thermal transition to an I2 phase, with an Fd3m structure, when the temperature was increased. All samples showed strong shear-thinning behavior from low shear rates. The elastic modulus (G′) in the I2 phase was around 1 order of magnitude higher than in the H2 phase. G′ was predominantly higher than the viscous modulus (G″). In the gel emulsions, G′ was nearly frequency-independent, indicating their gel-type nature. Contrary to normal water-in-oil (W/O) emulsions, in W/I2 and W/H2 gel emulsions G′, the complex viscosity (|η*|), and the yield stress (τ0) decreased with increasing water content, since the highly viscous microstructure of the continuous phase, rather than the volume fraction of dispersed phase and droplet size, was responsible for the high viscosity and elastic behavior of the emulsions.
A rheological analysis, in which the cooperative flow theory, the soft glass rheology model, and the slip plane model were analyzed and compared, was performed to obtain a single model that could describe the non-Maxwellian behavior of both reverse phases and highly concentrated emulsions and to characterize their microstructure through their rheological properties.
Abstract:
We perform several simulations using the Monte Carlo method in order to obtain the chemical equilibrium of several first-order reactions and one second-order reaction. We study several direct, reverse, and consecutive reactions. These simulations show the fluctuations and relaxation time and help in understanding the solution of the corresponding differential equations of chemical kinetics. This work was done in an undergraduate physical chemistry course at UNIFIEO.
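A minimal sketch of such a simulation, for the reversible first-order reaction A ⇌ B (per-step conversion probabilities stand in for the rate constants; this is an assumption-laden toy, not the course's original code):

```python
import random

def simulate_equilibrium(n_a=1000, k_f=0.02, k_r=0.01, steps=2000, seed=0):
    """Stochastic simulation of the first-order reaction A <=> B.
    Each step, every A molecule converts with probability k_f and every
    B molecule back-converts with probability k_r (a per-step Monte Carlo
    sketch, not the Gillespie algorithm)."""
    rng = random.Random(seed)
    a, b = n_a, 0
    history = []
    for _ in range(steps):
        forward = sum(rng.random() < k_f for _ in range(a))
        reverse = sum(rng.random() < k_r for _ in range(b))
        a += reverse - forward
        b += forward - reverse
        history.append(a)
    return a, b, history
```

At equilibrium the populations fluctuate around a/b ≈ k_r/k_f, so with these rates A settles near n_a/3 ≈ 333, and `history` shows both the exponential relaxation (time constant ≈ 1/(k_f + k_r) steps) and the fluctuations mentioned in the abstract.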
Abstract:
The paper presents an introductory and general discussion of quantum Monte Carlo methods, some fundamental algorithms, concepts, and their applicability. In order to introduce the quantum Monte Carlo method, preliminary concepts associated with Monte Carlo techniques are discussed.
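One such preliminary concept, Metropolis sampling of |ψ|², can be sketched with a variational Monte Carlo toy for the 1D harmonic oscillator (a standard textbook example, not taken from the paper):

```python
import random
import math

def vmc_energy(alpha, n_steps=50_000, step=1.0, seed=0):
    """Variational Monte Carlo for the 1D harmonic oscillator
    (hbar = m = omega = 1) with trial function psi = exp(-alpha x^2).
    Metropolis sampling of psi^2; the local energy is
    E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = random.Random(seed)
    x = 0.0
    e_sum = 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance on |psi|^2 = exp(-2 alpha x^2).
        if rng.random() < math.exp(-2 * alpha * (x_new ** 2 - x ** 2)):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2 * alpha * alpha)
    return e_sum / n_steps
```

At the exact value alpha = 1/2 the local energy is constant and equals the ground-state energy 1/2 for every sample (zero variance), while any other alpha gives a higher average, which is the variational principle these methods build on.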
Abstract:
Monte Carlo reactor physics codes, with the computing power available today, offer an interesting way to solve reactor physics problems. The new structures and materials used in fourth-generation nuclear reactors are challenging for computation codes designed for current reactors. In this work, a Monte Carlo reactor physics code and a CFD code are combined into a coupled calculation of a pebble-bed reactor, one type of high-temperature reactor. The approach used in this work is novel even by international standards.
Abstract:
The purpose of this master's thesis was to perform simulations involving the use of random numbers while testing hypotheses, especially on two sample populations compared by their means, variances, or Sharpe ratios. Specifically, we simulated some well-known distributions in Matlab and checked the accuracy of hypothesis testing. Furthermore, we went deeper and checked what happens once the bootstrapping method, as described by Efron, is applied to the simulated data. In addition, the robust Sharpe-ratio hypothesis test stated in the paper of Ledoit and Wolf was applied to measure the statistical significance of the performance difference between two investment funds, based on testing whether there is a statistically significant difference between their Sharpe ratios. We collected much literature on our topic and generated in Matlab as many simulated random numbers as possible to carry out our purpose. As a result, we came to a good understanding that tests are not always accurate; for instance, when testing whether two normally distributed random vectors come from the same normal distribution, the Jarque–Bera test for normality showed that for the normal random vectors r1 and r2, only 94.7% and 95.7%, respectively, were identified as coming from a normal distribution, while 5.3% and 4.3% failed to show the truth already known. However, when we introduced Efron's bootstrapping methods to estimate the p-values on which the hypothesis decision is based, the test was 100% accurate. From the above results, we conclude that bootstrapping methods should always be considered when testing or estimating statistics, because in most cases the outcomes are accurate and computational errors are minimized.
The robust Sharpe-ratio test, which is known to use one of the bootstrapping methods (the studentized one), was applied first to different simulated data, including distributions of many kinds and shapes, and second to real data from hedge and mutual funds. The test performed quite well, agreeing with the existence of a statistically significant difference between their Sharpe ratios, as described in the paper of Ledoit and Wolf.
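The bootstrap idea underlying these experiments can be sketched in a few lines; the example below is a plain Efron-style resampling test for a difference in means (in Python rather than Matlab, and not the studentized Ledoit–Wolf procedure):

```python
import random

def bootstrap_pvalue(x, y, n_boot=2000, seed=0):
    """Two-sample bootstrap test for equality of means: resample both
    groups from the pooled data (the null hypothesis) and count how often
    the resampled mean difference is at least as large as the observed one.
    An Efron-style sketch, not the studentized Ledoit-Wolf Sharpe test."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_boot):
        bx = [rng.choice(pooled) for _ in range(len(x))]
        by = [rng.choice(pooled) for _ in range(len(y))]
        if abs(sum(bx) / len(bx) - sum(by) / len(by)) >= observed:
            count += 1
    return count / n_boot
```

Resampling under the pooled null gives the distribution of the statistic when the groups are identical, so a small returned p-value is evidence of a genuine difference; the studentized variant used in the Sharpe-ratio test divides the statistic by its bootstrap standard error for better accuracy.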
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to many different application areas such as global positioning systems, target tracking, navigation, brain imaging, spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computation of the posterior probability density function. Except for a very restricted number of models, it is impossible to compute this density function in closed form. Hence, we need approximation methods. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on available measurements. Among these filters, particle filters are numerical methods for approximating the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter heavily depends on the chosen importance distribution. For instance, an inappropriate choice of the importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ particle filter convergence with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, estimation of parameters can be done by Markov chain Monte Carlo (MCMC) methods. In its operation, the MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, where the states are integrated out. This type of computation is then applied to estimate parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends strongly on the chosen proposal distribution. A commonly used proposal distribution is Gaussian, and for this kind of proposal the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
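A bootstrap particle filter of the kind discussed above can be sketched for a scalar linear-Gaussian model (the model and its parameters are illustrative assumptions, not taken from the thesis):

```python
import random
import math

def particle_filter(observations, n_particles=500, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for the scalar state space model
        x_k = 0.9 * x_{k-1} + N(0, q),   y_k = x_k + N(0, r),
    using the transition prior as the importance distribution and
    multinomial resampling (a minimal sketch of the method)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propagate through the dynamics (importance distribution = prior).
        particles = [0.9 * x + rng.gauss(0.0, math.sqrt(q)) for x in particles]
        # Weight by the Gaussian measurement likelihood.
        weights = [math.exp(-(y - x) ** 2 / (2 * r)) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling to combat weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

Using the transition prior as the importance distribution is the simplest choice the abstract alludes to; when measurements are very informative it wastes particles, which is exactly why the choice of importance distribution drives particle filter performance and convergence.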
Abstract:
All-electron partitioning of wave functions into products Ψcore·Ψval of core and valence parts in orbital space results in the loss of core–valence antisymmetry, uncorrelation of the motion of core and valence electrons, and core–valence overlap. These effects are studied with the variational Monte Carlo method using appropriately designed wave functions for the first-row atoms and positive ions. It is shown that the loss of antisymmetry with respect to interchange of core and valence electrons is a dominant effect which increases rapidly through the row, while the effect of core–valence uncorrelation is generally smaller. Orthogonality of the core and valence parts partially substitutes for the exclusion principle and is absolutely necessary for meaningful calculations with partitioned wave functions. Core–valence overlap may lead to nonsensical values of the total energy. It has been found that even relatively crude core–valence partitioned wave functions can generally estimate ionization potentials with better accuracy than traditional, non-partitioned ones, provided that they achieve maximum separation (independence) of the core and valence shells accompanied by high internal flexibility of Ψcore and Ψval. Our best core–valence partitioned wave function of that kind estimates the IPs with an accuracy comparable to the most accurate theoretical determinations in the literature.
Abstract:
We examined three different algorithms used in diffusion Monte Carlo (DMC) to study their precision and accuracy in predicting properties of isolated atoms: the H atom ground state, the Be atom ground state, and the H atom first excited state. All three algorithms (basic DMC, minimal stochastic reconfiguration DMC, and pure DMC, each with future-walking) were successfully implemented, with satisfactory results for ground-state energies and simple moment calculations. Pure diffusion Monte Carlo with the future-walking algorithm proved to be the simplest approach with the least variance. Polarizabilities for the Be atom ground state and the H atom first excited state were not satisfactorily estimated with the infinitesimal-differentiation approach. Likewise, an approach using the finite-field approximation with an unperturbed wavefunction for the latter system also failed. However, accurate estimates of the α-polarizabilities were obtained by using wavefunctions derived from time-independent perturbation theory. This suggests that the flaw in our approach to polarizability estimation for these difficult cases rests with our having assumed the trial function is unaffected by infinitesimal perturbations in the Hamiltonian.
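The basic DMC branching scheme can be sketched in bare-bones form for the 1D harmonic oscillator rather than the atoms studied above (no importance sampling or future-walking; all parameters are illustrative):

```python
import random
import math

def dmc_energy(n_walkers=300, n_steps=600, dt=0.05, seed=0):
    """Basic diffusion Monte Carlo for the 1D harmonic oscillator
    (hbar = m = omega = 1; exact ground-state energy 0.5).
    Walkers diffuse freely and branch with weight exp(-(V - E_ref) dt);
    the energy is read off the mixed estimator <V> over the walkers."""
    rng = random.Random(seed)
    walkers = [rng.gauss(0.0, 1.0) for _ in range(n_walkers)]
    e_ref = 0.5
    trace = []
    for _ in range(n_steps):
        new_walkers = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))        # free diffusion step
            weight = math.exp(-(0.5 * x * x - e_ref) * dt)
            copies = int(weight + rng.random())       # stochastic rounding
            new_walkers.extend([x] * min(copies, 3))  # cap to avoid blow-up
        walkers = new_walkers or [rng.gauss(0.0, 1.0)]
        # Population control: nudge E_ref to hold the walker count steady.
        e_ref += 0.1 * math.log(n_walkers / len(walkers))
        trace.append(sum(0.5 * x * x for x in walkers) / len(walkers))
    tail = trace[len(trace) // 2:]
    return sum(tail) / len(tail)
```

Population control and branching are exactly the ingredients that the minimal-stochastic-reconfiguration and pure-DMC variants modify to reduce variance; this sketch shows only the common core of the method.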
Abstract:
Optimization of wave functions in quantum Monte Carlo is a difficult task because the statistical uncertainty inherent to the technique makes the absolute determination of the global minimum difficult. To optimize these wave functions, we generate a large number of possible minima using many independently generated Monte Carlo ensembles and perform a conjugate-gradient optimization. We then construct histograms of the resulting nominally optimal parameter sets and "filter" them to identify which parameter sets "go together" to generate a local minimum. We follow with correlated-sampling verification runs to find the global minimum. We illustrate this technique for variance and variational-energy optimization for a variety of wave functions for small systems. For such optimized wave functions we calculate the variational energy and variance as well as various non-differential properties. The optimizations are either on par with or superior to determinations in the literature. Furthermore, we show that this technique is sufficiently robust that for molecules one may determine the optimal geometry at the same time as one optimizes the variational energy.
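The multi-ensemble idea can be caricatured as follows: many independent ensembles each yield a nominal optimum under correlated sampling, and a consensus is read off from their distribution (a toy with a 1D harmonic-oscillator trial function and a median in place of the histogram filtering, not the authors' actual procedure):

```python
import random
import math
import statistics

def local_energy(x, alpha):
    """E_L for trial psi = exp(-alpha x^2) in the 1D harmonic oscillator."""
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def optimal_alpha(ensemble, alphas):
    """Pick the alpha minimizing the sample variance of E_L over a fixed
    ensemble (correlated sampling: the same points score every alpha)."""
    return min(alphas, key=lambda a: statistics.pvariance(
        [local_energy(x, a) for x in ensemble]))

def consensus_alpha(n_ensembles=20, n_points=200, seed=0):
    """Generate many independent ensembles, optimize each one, and take
    the median of the nominal optima as the consensus parameter."""
    rng = random.Random(seed)
    alphas = [0.30 + 0.01 * i for i in range(41)]  # scan 0.30 .. 0.70
    optima = []
    for _ in range(n_ensembles):
        # Sample points from |psi|^2 at a reference alpha = 0.5 -> N(0, 1/2).
        ensemble = [rng.gauss(0.0, math.sqrt(0.5)) for _ in range(n_points)]
        optima.append(optimal_alpha(ensemble, alphas))
    return statistics.median(optima)
```

Because the local-energy variance vanishes at the exact alpha = 1/2, every ensemble's nominal optimum clusters there; with noisier, multi-parameter wave functions the optima scatter across several basins, which is what the histogram filtering and correlated-sampling verification runs are designed to disentangle.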