983 results for Monte-Carlo Method


Abstract:

The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures, the Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.
Program summary
Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: Micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 Mbytes of RAM and 100 MB of hard disk
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: The investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans, and a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Of the pathways by which radionuclides enter the human (or animal) body, ingestion is the most important because it is closely related to life-long alimentary (or dietary) habits. Radionuclides that are able to enter living cells by metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and for radiological protection. The time behavior of tracer concentration in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The General Multiple-Compartment Model (GMCM) is a powerful and widely accepted method for biokinetic studies, which allows the calculation of the concentration of trace elements in organs as a function of time when the flow parameters of the model are known. However, few biokinetic data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restrictions on the complexity of the problem: This version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a tracer is shorter than the volume rise time. Another restriction is related to the central-flux model: the model considered in the code assumes that there is one central compartment (e.g., blood) that exchanges flow with all other compartments, and flow between the other compartments is not included.
Typical running time: Depends on the choice of calculations. Using the Derivative method the time is very short (a few minutes) for any number of compartments considered. When the Gauss-Marquardt iterative method is used, the calculation time can be approximately 5-6 hours when about 15 compartments are considered.
(C) 2006 Elsevier B.V. All rights reserved.
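
To make the compartment picture concrete, the sketch below (not the STATFLUX code; the compartments, transfer coefficients and "measured" data are all invented) integrates a small central-compartment model and recovers its transfer coefficients from noisy concentration curves with a crude Monte Carlo search over the parameter space.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rate_matrix(k_out, k_in):
    # Central-flux model: compartment 0 (blood) exchanges with each peripheral
    # compartment i via k_out[i] (blood -> i) and k_in[i] (i -> blood); no direct
    # flow between peripheral compartments.
    n = len(k_out) + 1
    K = np.zeros((n, n))
    for i, (ko, ki) in enumerate(zip(k_out, k_in), start=1):
        K[i, 0] += ko
        K[0, 0] -= ko
        K[0, i] += ki
        K[i, i] -= ki
    return K

def concentrations(K, c0, times):
    # Solution of dc/dt = K c at the sampling times.
    return np.array([expm(K * t) @ c0 for t in times])

# Hypothetical "measured" data: two peripheral compartments, unit initial activity in blood.
true_K = rate_matrix([0.8, 0.3], [0.2, 0.1])
times = np.linspace(0.0, 10.0, 15)
c0 = np.array([1.0, 0.0, 0.0])
data = concentrations(true_K, c0, times) + rng.normal(0.0, 0.005, (len(times), 3))

# Inverse problem: Monte Carlo search over the four transfer coefficients.
best, best_sse = None, np.inf
for _ in range(3000):
    trial = rng.uniform(0.01, 1.0, 4)
    model = concentrations(rate_matrix(trial[:2], trial[2:]), c0, times)
    sse = np.sum((model - data) ** 2)
    if sse < best_sse:
        best, best_sse = trial, sse

print("best transfer coefficients found:", np.round(best, 2), "(true: 0.8 0.3 0.2 0.1)")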

Abstract:

We present a new method to quantify substructure in clusters of galaxies, based on the analysis of the intensity of structures in a residual image, obtained by subtracting from the X-ray image a surface brightness model fitted with a two-dimensional analytical profile (beta-model or Sersic profile) with elliptical symmetry. Our method is applied to 34 clusters observed by the Chandra X-ray Observatory that lie in the redshift range z ∈ [0.02, 0.2] and have a signal-to-noise ratio (S/N) greater than 100. We present the calibration of the method and the relations between the substructure level and physical quantities such as the mass, X-ray luminosity, temperature, and cluster redshift. We use our method to separate the clusters into two sub-samples of high and low substructure levels. We conclude, using Monte Carlo simulations, that the method recovers the true amount of substructure very well for clusters with small angular core radii (with respect to the whole image size) and good S/N observations. We find no evidence of correlation between the substructure level and physical properties of the clusters such as gas temperature, X-ray luminosity, and redshift; however, the analysis suggests a trend between the substructure level and cluster mass. The scaling relations for the two sub-samples (high- and low-substructure-level clusters) are different (they present an offset, i.e., at a fixed mass or temperature, low-substructure clusters tend to be more X-ray luminous). This is an important result for cosmological tests that use the mass-luminosity relation to obtain the cluster mass function, since these rely on the assumption that clusters do not present different scaling relations according to their dynamical state.
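
The residual-image idea can be illustrated with a short, self-contained sketch on synthetic data; the circular beta-model, the injected substructure and the 3-sigma flux statistic below are illustrative simplifications of the paper's elliptical-model fits and calibrated substructure level.

import numpy as np
from scipy.optimize import curve_fit

ny = nx = 128
y, x = np.mgrid[0:ny, 0:nx]

def beta_model(coords, s0, x0, y0, rc, beta):
    # Circular beta-model surface brightness, returned flattened for curve_fit.
    xx, yy = coords
    r2 = (xx - x0) ** 2 + (yy - y0) ** 2
    return (s0 * (1.0 + r2 / rc ** 2) ** (0.5 - 3.0 * beta)).ravel()

# Synthetic image: smooth cluster plus an off-centre substructure, with Poisson noise.
rng = np.random.default_rng(1)
smooth = beta_model((x, y), 50.0, 64, 64, 12.0, 0.7).reshape(ny, nx)
blob = 8.0 * np.exp(-((x - 90) ** 2 + (y - 40) ** 2) / (2 * 6.0 ** 2))
image = rng.poisson(smooth + blob).astype(float)

# Fit the smooth model and build the residual image.
p0 = [image.max(), nx / 2, ny / 2, 10.0, 0.7]
popt, _ = curve_fit(beta_model, (x, y), image.ravel(), p0=p0)
model = beta_model((x, y), *popt).reshape(ny, nx)
residual = image - model

# A simple substructure statistic: fraction of the flux left in significant residuals.
noise = np.sqrt(np.maximum(model, 1.0))
substructure_level = residual[residual > 3.0 * noise].sum() / image.sum()
print(f"substructure level ~ {substructure_level:.3f}")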

Abstract:

There is a continuous search for theoretical methods that are able to describe the effects of the liquid environment on molecular systems. Different methods emphasize different aspects, and the treatment of both local and bulk properties is still a great challenge. In this work, the electronic properties of a water molecule in the liquid environment are studied by relaxing the geometry and electronic distribution using the free energy gradient method. This is done in a series of steps, in each of which we run a purely molecular mechanical (MM) Metropolis Monte Carlo simulation of liquid water and subsequently perform a quantum mechanical/molecular mechanical (QM/MM) calculation of the ensemble averages of the charge distribution, atomic forces, and second derivatives. The MP2/aug-cc-pV5Z level is used to describe the electronic properties of the QM water. B3LYP with specially designed basis functions is used for the magnetic properties. Very good agreement is found for the local properties of water, such as the geometry, vibrational frequencies, dipole moment, dipole polarizability, chemical shift, and spin-spin coupling constants. The very good performance of the free energy method combined with a QM/MM approach, along with its possible limitations, is briefly discussed.
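
A toy, one-dimensional version of the free energy gradient loop is sketched below: a "solute" coordinate is relaxed along the mean force averaged over Metropolis Monte Carlo samples of a harmonic "solvent". The potential and all parameters are invented; the actual work uses QM/MM energies, forces and second derivatives at the MP2 level.

import numpy as np

rng = np.random.default_rng(2)
beta = 1.0                               # inverse temperature
k_solute, k_couple, n_solv = 4.0, 1.0, 20

def energy(q, s):
    # Toy potential: harmonic "solute" plus harmonic coupling to each "solvent" coordinate.
    return 0.5 * k_solute * q ** 2 + 0.5 * k_couple * np.sum((s - q) ** 2)

def dU_dq(q, s):
    return k_solute * q - k_couple * np.sum(s - q)

q = 2.0                                  # initial solute geometry
s = rng.normal(0.0, 1.0, n_solv)         # initial solvent configuration

for iteration in range(30):              # free-energy-gradient iterations
    forces = []
    for sweep in range(2000):            # Metropolis sampling of the solvent at fixed q
        i = rng.integers(n_solv)
        trial = s.copy()
        trial[i] += rng.normal(0.0, 0.3)
        if rng.random() < np.exp(-beta * (energy(q, trial) - energy(q, s))):
            s = trial
        if sweep >= 500:                 # discard equilibration sweeps
            forces.append(-dU_dq(q, s))
    q += 0.05 * np.mean(forces)          # steepest-descent step along the mean force

print(f"relaxed solute coordinate q = {q:.3f} (free-energy minimum of the toy model is at 0)")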

Abstract:

Context. Convergent point (CP) search methods are important tools for studying the kinematic properties of open clusters and young associations whose members share the same spatial motion. Aims. We present a new CP search strategy based on proper motion data. We test the new algorithm on synthetic data and compare it with previous versions of the CP search method. As an illustration and validation of the new method we also present an application to the Hyades open cluster and a comparison with independent results. Methods. The new algorithm rests on the idea of representing the stellar proper motions by great circles over the celestial sphere and visualizing their intersections as the CP of the moving group. The new strategy combines a maximum-likelihood analysis, for simultaneously determining the CP and selecting the most likely group members, with a minimization procedure that returns a refined CP position and its uncertainties. The method allows one to correct for internal motions within the group and takes into account that the stars in the group lie at different distances. Results. Based on Monte Carlo simulations, we find that the new CP search method in many cases returns a more precise solution than its previous versions. The new method is able to find and eliminate more field stars in the sample and is not biased towards distant stars. The CP solution for the Hyades open cluster is in excellent agreement with previous determinations.
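
The geometric core of the method (each star's position and proper-motion direction define a great circle whose pole is perpendicular to the CP) can be sketched as a small least-squares problem on synthetic data; the error model and the maximum-likelihood membership selection of the paper are not reproduced here.

import numpy as np

rng = np.random.default_rng(3)

def radec_to_unit(ra, dec):
    return np.array([np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)])

# Synthetic moving group: all members share the same space-velocity direction (the CP).
cp_true = radec_to_unit(np.radians(97.0), np.radians(6.0))
poles = []
for _ in range(50):
    r = radec_to_unit(rng.uniform(0.0, 2.0 * np.pi), np.arcsin(rng.uniform(-0.5, 0.5)))
    # Proper-motion direction: projection of the common velocity onto the sky at r,
    # perturbed to mimic measurement errors and internal motions.
    t = cp_true - np.dot(cp_true, r) * r
    t = t / np.linalg.norm(t) + rng.normal(0.0, 0.02, 3)
    p = np.cross(r, t)                      # pole of the star's great circle
    poles.append(p / np.linalg.norm(p))

# The CP is (nearly) orthogonal to every pole: minimise sum_i (p_i . x)^2 over unit x,
# i.e. take the right-singular vector with the smallest singular value.
_, _, vt = np.linalg.svd(np.array(poles))
cp_est = vt[-1]
cp_est *= np.sign(np.dot(cp_est, cp_true))  # resolve the sign (antipode) ambiguity
err = np.degrees(np.arccos(np.clip(np.dot(cp_est, cp_true), -1.0, 1.0)))
print(f"angular error of the recovered convergent point: {err:.2f} deg")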

Abstract:

In this work we compared the estimates of the parameters of ARCH models obtained with a complete Bayesian method and with an empirical Bayesian method, in which we adopted a non-informative prior distribution and an informative prior distribution, respectively. We also considered a reparameterization of those models in order to map the parameter space into the real space; this procedure permits choosing normal prior distributions for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated on the Telebras series from the Brazilian financial market. The results show that the two methods are able to fit ARCH models with different numbers of parameters. The empirical Bayesian method provided a more parsimonious model and a better fit to the data than the complete Bayesian method.
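
A minimal sketch of the reparameterization idea for an ARCH(1) model is given below: the positive parameters are log-transformed so that normal priors and random-walk proposals live on the real line, and a Metropolis sampler produces the posterior summaries. The data, priors and tuning are illustrative, not the Telebras analysis.

import numpy as np

rng = np.random.default_rng(4)

# Simulated ARCH(1) "returns": y_t = e_t*sqrt(h_t), h_t = a0 + a1*y_{t-1}^2.
T, a0_true, a1_true = 2000, 0.1, 0.4
y = np.zeros(T)
for t in range(1, T):
    y[t] = rng.normal(0.0, np.sqrt(a0_true + a1_true * y[t - 1] ** 2))

def log_post(phi, psi):
    # Log-posterior in the transformed parameters phi = log a0, psi = log a1,
    # with N(0, 10) priors placed directly on the transformed scale.
    a0, a1 = np.exp(phi), np.exp(psi)
    h = a0 + a1 * y[:-1] ** 2
    loglik = -0.5 * np.sum(np.log(2.0 * np.pi * h) + y[1:] ** 2 / h)
    return loglik - 0.5 * (phi ** 2 + psi ** 2) / 10.0

phi, psi = np.log(0.5), np.log(0.5)
lp = log_post(phi, psi)
samples = []
for it in range(20000):                      # random-walk Metropolis
    phi_p, psi_p = phi + rng.normal(0.0, 0.05), psi + rng.normal(0.0, 0.05)
    lp_p = log_post(phi_p, psi_p)
    if np.log(rng.random()) < lp_p - lp:
        phi, psi, lp = phi_p, psi_p, lp_p
    if it >= 5000:                           # discard burn-in
        samples.append((np.exp(phi), np.exp(psi)))

a0_hat, a1_hat = np.mean(samples, axis=0)
print(f"posterior means: a0 ~ {a0_hat:.3f}, a1 ~ {a1_hat:.3f} (true values 0.10, 0.40)")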

Abstract:

The lattice Boltzmann method is a popular approach for simulating hydrodynamic interactions in soft matter and complex fluids. The solvent is represented on a discrete lattice whose nodes are populated by particle distributions that propagate on the discrete links between the nodes and undergo local collisions. On large length and time scales, the microdynamics leads to a hydrodynamic flow field that satisfies the Navier-Stokes equation. In this thesis, several extensions to the lattice Boltzmann method are developed.

In complex fluids, for example suspensions, Brownian motion of the solutes is of paramount importance. However, it cannot be simulated with the original lattice Boltzmann method because the dynamics is completely deterministic. It is possible, though, to introduce thermal fluctuations in order to reproduce the equations of fluctuating hydrodynamics. In this work, a generalized lattice gas model is used to systematically derive the fluctuating lattice Boltzmann equation from statistical mechanics principles. The stochastic part of the dynamics is interpreted as a Monte Carlo process, which is then required to satisfy the condition of detailed balance. This leads to an expression for the thermal fluctuations which implies that it is essential to thermalize all degrees of freedom of the system, including the kinetic modes. The new formalism guarantees that the fluctuating lattice Boltzmann equation is simultaneously consistent with both fluctuating hydrodynamics and statistical mechanics. This establishes a foundation for future extensions, such as the treatment of multi-phase and thermal flows.

An important range of applications for the lattice Boltzmann method is microfluidics. Fostered by the "lab-on-a-chip" paradigm, there is an increasing need for computer simulations able to complement the achievements of theory and experiment. Microfluidic systems are characterized by a large surface-to-volume ratio and, therefore, boundary conditions are of special relevance. On the microscale, the standard no-slip boundary condition used in hydrodynamics has to be replaced by a slip boundary condition. In this work, a boundary condition for lattice Boltzmann is constructed that allows the slip length to be tuned by a single model parameter. Furthermore, a conceptually new approach for constructing boundary conditions is explored, in which the reduced symmetry at the boundary is explicitly incorporated into the lattice model. The lattice Boltzmann method is systematically extended to the reduced-symmetry model. For Poiseuille flow in a plane channel, it is shown that a special choice of the collision operator is required to reproduce the correct flow profile. This systematic approach sheds light on the consequences of the reduced symmetry at the boundary and leads to a deeper understanding of boundary conditions in the lattice Boltzmann method, which can help to develop improved boundary conditions that lead to more accurate simulation results.
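
For readers unfamiliar with the basic algorithm, the sketch below implements the standard (deterministic) D2Q9 BGK lattice Boltzmann method for body-force-driven Poiseuille flow between bounce-back walls and compares the steady profile with the parabolic solution. The thermal fluctuations and tunable-slip boundary conditions developed in the thesis are not included, and all parameters are illustrative.

import numpy as np

# D2Q9 lattice: velocities, weights, and the index of the opposite direction.
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1], [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

nx, ny = 32, 34            # 32 fluid rows between two solid rows (j = 0 and j = ny - 1)
tau = 0.9                  # relaxation time -> kinematic viscosity nu = (tau - 0.5)/3
g = 1e-5                   # body force along x, standing in for a pressure gradient
solid = np.zeros((nx, ny), dtype=bool)
solid[:, 0] = solid[:, -1] = True

f = np.ones((9, nx, ny)) * w[:, None, None]    # start from rest at density 1

def equilibrium(rho, ux, uy):
    usq = ux ** 2 + uy ** 2
    feq = np.empty((9, nx, ny))
    for i in range(9):
        eu = e[i, 0] * ux + e[i, 1] * uy
        feq[i] = w[i] * rho * (1.0 + 3.0 * eu + 4.5 * eu ** 2 - 1.5 * usq)
    return feq

for step in range(20000):
    rho = f.sum(axis=0)
    ux = (f * e[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * e[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision; the body force enters as a shift of the equilibrium velocity.
    fout = f - (f - equilibrium(rho, ux + tau * g, uy)) / tau
    # Bounce-back replaces the collision at the wall nodes (no-slip).
    fout[:, solid] = f[opp][:, solid]
    # Streaming along the lattice links (periodic wrap via np.roll).
    for i in range(9):
        f[i] = np.roll(np.roll(fout[i], e[i, 0], axis=0), e[i, 1], axis=1)

# Compare the steady velocity profile with the Poiseuille parabola.
ux = (f * e[:, 0, None, None]).sum(axis=0) / f.sum(axis=0)
profile = ux[:, 1:-1].mean(axis=0)
nu, H = (tau - 0.5) / 3.0, ny - 2
ywall = np.arange(1, ny - 1) - 0.5          # distance of fluid nodes from the bottom wall
analytic = g / (2.0 * nu) * ywall * (H - ywall)
print("max relative deviation from the Poiseuille profile:",
      float(np.max(np.abs(profile - analytic)) / analytic.max()))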

Abstract:

Today we know that ordinary matter represents only a small fraction of the total mass content of the Universe. The hypothesis of the existence of Dark Matter, a new type of matter that interacts only gravitationally and, perhaps, through the weak force, has been supported by numerous lines of evidence on both galactic and cosmological scales. Efforts devoted to the search for so-called WIMPs (Weakly Interacting Massive Particles), the generic name given to Dark Matter particles, have multiplied over the last few years. The XENON1T experiment, currently under construction at the Laboratori Nazionali del Gran Sasso (LNGS) and expected to start taking data by the end of 2015, will mark a significant step forward in the direct search for Dark Matter, which is based on the detection of elastic collisions on target nuclei. XENON1T represents the current phase of the XENON project, which has already carried out the XENON10 (2005) and XENON100 (2008, still in operation) experiments and also foresees a further development, called XENONnT. The XENON1T detector uses about 3 tonnes of liquid xenon (LXe) and is based on a dual-phase Time Projection Chamber (TPC). Detailed Monte Carlo simulations of the detector geometry, together with dedicated measurements of the radioactivity of the materials and estimates of the purity of the xenon used, have made it possible to predict the expected background accurately. In this thesis, we present the study of the expected sensitivity of XENON1T, carried out with the statistical method known as the Profile Likelihood (PL) Ratio, which, within a frequentist approach, allows an appropriate treatment of systematic uncertainties. The sensitivity was first estimated using the simplified Likelihood Ratio method, which does not take any systematics into account; in this way it was possible to evaluate the impact of the main systematic uncertainty for XENON1T, namely the scintillation-light yield of xenon for low-energy nuclear recoils. The final results obtained with the PL method indicate that XENON1T will be able to improve the current WIMP exclusion limits significantly; the maximum sensitivity reaches a cross section σ = 1.2×10^-47 cm^2 for a WIMP mass of 50 GeV/c^2 and a nominal exposure of 2 tonne·years. These results are in line with the ambitious goal of XENON1T of lowering the current limits on the WIMP cross section σ by two orders of magnitude. With such performance, and considering 1 tonne of LXe as the fiducial mass, XENON1T will be able to surpass the current limits (LUX experiment, 2013) after only 5 days of data taking.
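
The profile-likelihood construction can be illustrated on a much simpler problem: a single-bin counting experiment with a Gaussian-constrained background as the only nuisance parameter. The sketch below uses invented numbers and the asymptotic chi-square threshold; the XENON1T sensitivity study uses a full likelihood over the measured observables and toy-Monte-Carlo distributions of the test statistic.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson, norm, chi2

n_obs, b0, sigma_b, s_ref = 3, 2.5, 0.5, 4.0   # observed counts, background, its uncertainty, signal for mu = 1

def nll(mu, b):
    # Negative log-likelihood: Poisson counts times a Gaussian constraint on the background.
    return -poisson.logpmf(n_obs, mu * s_ref + b) - norm.logpdf(b, b0, sigma_b)

def profiled_nll(mu):
    # Minimise over the nuisance parameter b at fixed signal strength mu.
    res = minimize_scalar(lambda b: nll(mu, b), bounds=(1e-6, b0 + 10 * sigma_b), method="bounded")
    return res.fun

# Unconditional fit (mu restricted to be non-negative).
fit = minimize_scalar(profiled_nll, bounds=(0.0, 10.0), method="bounded")
mu_hat, nll_min = fit.x, fit.fun

# 90% CL upper limit: smallest mu above mu_hat whose profile likelihood ratio statistic
# q(mu) = 2*(profiled_nll(mu) - nll_min) exceeds the asymptotic one-sided threshold.
threshold = chi2.ppf(1.0 - 2 * 0.10, df=1)
mus = np.linspace(0.0, 10.0, 400)
q = np.array([2.0 * (profiled_nll(m) - nll_min) for m in mus])
upper = mus[(mus > mu_hat) & (q > threshold)][0]
print(f"mu_hat = {mu_hat:.2f}, 90% CL upper limit on the signal strength ~ {upper:.2f}")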

Abstract:

Signal proteins are able to adapt their response to a change in the environment, governing in this way a broad variety of important cellular processes in living systems. While conventional molecular-dynamics (MD) techniques can be used to explore the early signaling pathway of these protein systems at atomistic resolution, the high computational costs limit their usefulness for the elucidation of the multiscale transduction dynamics of most signaling processes, which occur on experimental timescales. To cope with this problem, we present in this paper a novel multiscale-modeling method, based on a combination of the kinetic Monte Carlo and MD techniques, and demonstrate its suitability for investigating the signaling behavior of the photoswitch light-oxygen-voltage-2-Jα domain from Avena sativa (AsLOV2-Jα) and an AsLOV2-Jα-regulated photoactivatable Rac1 GTPase (PA-Rac1), recently employed to control the motility of cancer cells through a light stimulus. More specifically, we show that their signaling pathways begin with a rearrangement of residues and subsequent H-bond formation of amino acids near the flavin-mononucleotide chromophore, causing a coupling between β-strands and a subsequent detachment of a peripheral α-helix from the AsLOV2 domain. In the case of the PA-Rac1 system we find that this latter process induces the release of the AsLOV2 inhibitor from the switch-II activation site of the GTPase, enabling signal activation through effector-protein binding. These applications demonstrate that our approach reliably reproduces the signaling pathways of complex signal proteins on timescales ranging from nanoseconds up to seconds at affordable computational costs.
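
The kinetic Monte Carlo layer of such a multiscale scheme can be sketched with a Gillespie-type algorithm over a handful of coarse-grained states; the states and rate constants below are invented stand-ins for the transition rates that, in the paper's method, are supplied by the atomistic MD simulations.

import numpy as np

rng = np.random.default_rng(7)

states = ["dark", "lit", "undocked", "signalling"]
# rates[i][j]: transition rate (1/s) from state i to state j (0 = transition not allowed).
rates = np.array([
    [0.0, 2.0, 0.0, 0.0],    # dark -> lit (photoexcitation)
    [0.5, 0.0, 1.0, 0.0],    # lit -> dark, or J-alpha helix undocking
    [0.0, 0.2, 0.0, 0.8],    # undocked -> lit (re-docking), or effector binding
    [0.0, 0.0, 0.1, 0.0],    # signalling -> undocked
])

state, t = 0, 0.0
trajectory = [(t, states[state])]
while t < 20.0 and state != 3:           # stop when the signalling state is first reached
    k = rates[state]
    k_tot = k.sum()
    t += rng.exponential(1.0 / k_tot)                 # waiting time until the next event
    state = rng.choice(len(states), p=k / k_tot)      # which transition fires
    trajectory.append((t, states[state]))

for time, name in trajectory:
    print(f"t = {time:6.2f} s  ->  {name}")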

Abstract:

The new Spanish regulation on building acoustics establishes values and limits for the various acoustic magnitudes, whose fulfilment can be verified by means of field measurements. In this sense, an essential aspect of a field measurement is to report the measured magnitude together with its associated uncertainty. In the calculation of the uncertainty it is very common to follow the uncertainty propagation method described in the Guide to the Expression of Uncertainty in Measurement (GUM). Another option is a numerical calculation based on the distribution propagation method, by means of Monte Carlo simulation; several publications have already developed this latter method using different software programs. In the present work, we used Excel to carry out the Monte Carlo simulation for the calculation of the uncertainty associated with the different magnitudes derived from field measurements following ISO 140-4, 140-5 and 140-7, and we compare the results with those obtained by the uncertainty propagation method. Although both methods give similar values, some small differences have been observed. Possible explanations for these differences are the asymmetry of the probability distributions associated with the input magnitudes and the overestimation of the uncertainty obtained by following the GUM.
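
The distribution propagation method is easy to sketch for one of the ISO 140-4 magnitudes, the standardised level difference DnT = L1 - L2 + 10 log10(T/T0): sample the input quantities, build the output distribution, and compare with the GUM first-order propagation. The example below uses Python and invented input values and uncertainties, whereas the paper performs the simulation in Excel.

import numpy as np

rng = np.random.default_rng(8)
N = 200_000
T0 = 0.5                               # reference reverberation time, s

L1, u_L1 = 95.0, 0.6                   # source-room level, dB
L2, u_L2 = 60.0, 0.8                   # receiving-room level, dB
T, u_T = 0.9, 0.08                     # reverberation time, s

# Distribution propagation: sample the inputs and build the output distribution.
l1 = rng.normal(L1, u_L1, N)
l2 = rng.normal(L2, u_L2, N)
t = rng.normal(T, u_T, N)
dnt = l1 - l2 + 10.0 * np.log10(t / T0)
print(f"Monte Carlo: DnT = {dnt.mean():.2f} dB, u = {dnt.std(ddof=1):.3f} dB")

# GUM uncertainty propagation (first-order sensitivity coefficients).
c_T = 10.0 / (T * np.log(10.0))        # d(DnT)/dT
u_gum = np.sqrt(u_L1 ** 2 + u_L2 ** 2 + (c_T * u_T) ** 2)
print(f"GUM:         DnT = {L1 - L2 + 10.0 * np.log10(T / T0):.2f} dB, u = {u_gum:.3f} dB")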

Abstract:

Fission product yields are fundamental parameters for several nuclear engineering calculations and in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past and evaluations were released, although they are still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices in order to produce inventory calculation results that take the complete uncertainty data into account. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. Then, we focused on the effect of fission yield covariance information on fission pulse decay heat results for the thermal fission of 235U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values, and the results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of the decay heat.
Introduction: Nowadays, any engineering calculation performed in the nuclear field should be accompanied by an uncertainty analysis, in which different sources of uncertainty are taken into account. Works such as those performed under the UAM project (Ivanov, et al., 2013) treat nuclear data as a source of uncertainty, in particular cross-section data, for which uncertainties given in the form of covariance matrices are already provided in the major nuclear data libraries. Meanwhile, fission yield uncertainties were often neglected or treated only superficially, because their effects were considered of second order compared to cross-sections (Garcia-Herranz, et al., 2010). However, the Working Party on International Nuclear Data Evaluation Co-operation (WPEC)
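
The Monte Carlo sampling step can be sketched with a toy three-nuclide inventory: fission yields are perturbed with and without correlations and propagated to a decay-heat sum, showing how correlations change the output statistics. The yields, decay data and assumed correlation below are invented; the real calculations use the JEFF-3.1.2 file and the ACAB and ALEPH-2 codes.

import numpy as np

rng = np.random.default_rng(9)

y_mean = np.array([0.060, 0.030, 0.010])       # independent fission yields
rel_u = np.array([0.05, 0.08, 0.10])           # relative standard uncertainties
lam = np.array([1e-2, 5e-3, 1e-3])             # decay constants, 1/s
E = np.array([0.5, 1.2, 0.8])                  # mean energy per decay, MeV

corr = np.eye(3)
corr[0, 1] = corr[1, 0] = -0.7                 # e.g. an anti-correlation from a sum rule
cov = np.outer(y_mean * rel_u, y_mean * rel_u) * corr

def decay_heat(y, t=100.0, fissions=1e18):
    # Decay heat (MeV/s) at time t after a fission pulse, for the toy 3-nuclide inventory.
    return np.sum(fissions * y * lam * E * np.exp(-lam * t))

samples_corr = rng.multivariate_normal(y_mean, cov, size=20_000)
samples_ind = rng.normal(y_mean, y_mean * rel_u, size=(20_000, 3))

h_corr = np.array([decay_heat(y) for y in samples_corr])
h_ind = np.array([decay_heat(y) for y in samples_ind])
print(f"relative uncertainty of decay heat, correlated yields:  {h_corr.std() / h_corr.mean():.3%}")
print(f"relative uncertainty of decay heat, independent yields: {h_ind.std() / h_ind.mean():.3%}")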

Abstract:

Dominance measuring methods are a recent approach for dealing with complex decision-making problems with imprecise information. These methods are based on the computation of pairwise dominance values and exploit the information in the dominance matrix in different ways to derive measures of dominance intensity and rank the alternatives under consideration. In this paper we propose a new dominance measuring method to deal with ordinal information about decision-maker preferences in both weights and component utilities. It takes advantage of the centroid of the polytope delimited by the ordinal information and builds triangular fuzzy numbers whose distances to the crisp value 0 constitute the basis for the definition of a dominance intensity measure. Monte Carlo simulation techniques have been used to compare the performance of this method with other existing approaches.
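
As a point of reference for such comparisons, the sketch below shows a plain Monte Carlo treatment of ordinal weight information: weight vectors consistent with a complete ranking are sampled by drawing points uniformly from the simplex and sorting them, and rank-acceptability indices are accumulated for an invented utility matrix. It is not the proposed dominance-intensity method itself.

import numpy as np

rng = np.random.default_rng(10)

# utilities[a, k]: component utility of alternative a under criterion k (invented).
utilities = np.array([
    [0.9, 0.4, 0.3],
    [0.6, 0.7, 0.5],
    [0.2, 0.8, 0.9],
])
n_alt, n_crit = utilities.shape
n_samples = 50_000

rank_counts = np.zeros((n_alt, n_alt))
for _ in range(n_samples):
    w = rng.dirichlet(np.ones(n_crit))          # uniform point on the weight simplex
    w = np.sort(w)[::-1]                        # impose the ordinal ranking w1 >= w2 >= w3
    order = np.argsort(-utilities @ w)          # alternatives sorted by overall utility
    for rank, alt in enumerate(order):
        rank_counts[alt, rank] += 1

acceptability = rank_counts / n_samples
for a in range(n_alt):
    print(f"alternative {a}: P(best) = {acceptability[a, 0]:.3f}")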

Abstract:

This paper presents an adaptation of the Cross-Entropy (CE) method to optimize fuzzy logic controllers. The CE method is a recently developed optimization technique based on a general Monte Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling. This work shows the application of this optimization method to tune the input gains, the location and size of the membership function sets of each variable, and the weight of each rule in the rule base of a fuzzy logic controller (FLC). The control system presented in this work was designed to command the orientation of an unmanned aerial vehicle (UAV) so as to modify its trajectory and avoid collisions. An onboard forward-looking camera was used to sense the environment of the UAV, and the information extracted by the image processing algorithm is the only input of the fuzzy control approach used to avoid a collision with a predefined object. Real tests with a quadrotor have been carried out to corroborate the improved behavior of the optimized controllers at different stages of the optimization process.
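
The CE optimization loop itself is simple: sample candidate parameter vectors from a parametric (here Gaussian) distribution, keep the elite fraction with the best performance, and refit the distribution to the elite. The sketch below applies it to tuning the two gains of a toy PD controller on a double integrator, as a stand-in for the fuzzy-controller parameters; the plant, cost and CE settings are invented.

import numpy as np

rng = np.random.default_rng(11)

def cost(gains):
    # Integrated squared tracking error of a discretised double integrator.
    kp, kd = gains
    x, v, dt, err = 0.0, 0.0, 0.02, 0.0
    for _ in range(500):
        u = kp * (1.0 - x) - kd * v      # PD control toward the set point x = 1
        v += u * dt
        x += v * dt
        err += (1.0 - x) ** 2 * dt
    return err

mu, sigma = np.array([1.0, 1.0]), np.array([2.0, 2.0])
n_samples, n_elite = 100, 10

for iteration in range(30):
    samples = rng.normal(mu, sigma, size=(n_samples, 2))
    scores = np.array([cost(s) for s in samples])
    elite = samples[np.argsort(scores)[:n_elite]]
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-3   # refit the sampling distribution

print(f"tuned gains: kp = {mu[0]:.2f}, kd = {mu[1]:.2f}, cost = {cost(mu):.4f}")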

Abstract:

In this paper, we present a technique for the characterization of adsorption equilibria in activated carbon having slit-shaped pores. This method was first developed by Do (Do, D. D. A new method for the characterisation of micro-mesoporous materials. Presented at the International Symposium on New Trends in Colloid and Interface Science, September 24-26, 1998, Chiba, Japan) and applied by his group and other groups for the characterization of pore size distribution (PSD) as well as the determination of adsorption equilibria for a wide range of hydrocarbons. It is refined in this paper and compared with grand canonical Monte Carlo (GCMC) simulation and density functional theory (DFT). The refined theory results in good agreement between the pore-filling pressure as a function of pore width and that obtained by GCMC and DFT. Furthermore, our local isotherms are in good qualitative agreement with those obtained by the GCMC simulations. The main advantage of this method is that it is about 4 orders of magnitude faster than the GCMC simulations, making it suitable for optimization studies and design purposes. Finally, we apply our method and GCMC to the derivation of the PSD of a commercial activated carbon, and find that the PSD derived from our method is comparable to that derived from the GCMC simulations.
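
For orientation, a toy grand canonical Monte Carlo loop for adsorption in a slit pore is sketched below, with non-interacting particles and a square-well wall attraction; all parameters are invented, and real pore-characterisation GCMC uses a Lennard-Jones fluid with Steele-type wall potentials.

import numpy as np

rng = np.random.default_rng(12)

H = 3.0                    # pore width (reduced units)
beta = 1.0                 # 1 / (k_B T)
z_act = 0.05               # activity exp(beta*mu) / Lambda^3
area = 25.0                # lateral area of the simulation cell
V = area * H

def wall_energy(zpos):
    # Square-well attraction of depth 2 within 0.5 of either wall (toy model).
    return -2.0 if (zpos < 0.5 or zpos > H - 0.5) else 0.0

positions = []             # z-coordinates only (x, y are irrelevant without interactions)
n_hist = []
for step in range(200_000):
    if rng.random() < 0.5:                       # attempt an insertion
        z_new = rng.uniform(0.0, H)
        du = wall_energy(z_new)
        if rng.random() < z_act * V / (len(positions) + 1) * np.exp(-beta * du):
            positions.append(z_new)
    elif positions:                              # attempt a deletion
        i = rng.integers(len(positions))
        du = -wall_energy(positions[i])
        if rng.random() < len(positions) / (z_act * V) * np.exp(-beta * du):
            positions.pop(i)
    if step > 50_000:
        n_hist.append(len(positions))

print(f"mean number of adsorbed particles at activity {z_act}: {np.mean(n_hist):.1f}")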

Abstract:

We present a novel method, called the transform likelihood ratio (TLR) method, for the estimation of rare-event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare-event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare-event probability via importance sampling, using the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
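
A minimal sketch of the idea: for a heavy-tailed Weibull variable, a change of variables turns the rare event into one involving a light-tailed exponential variable, after which a standard exponential change of measure is applied. The tilting parameter below is a simple heuristic rather than the cross-entropy-optimised one, and the numbers are illustrative.

import numpy as np

rng = np.random.default_rng(13)

alpha = 0.2                       # Weibull shape < 1: heavy tail
gamma = 1e6                       # rare-event threshold: estimate P(X > gamma)
exact = np.exp(-gamma ** alpha)   # known analytically for the standard Weibull(alpha)

# Transform: if X = Y**(1/alpha) with Y ~ Exp(1), then {X > gamma} = {Y > c}.
c = gamma ** alpha

N = 100_000
# Crude Monte Carlo in the original (heavy-tailed) variable essentially never hits the event.
x = rng.weibull(alpha, N)
print(f"crude MC estimate: {np.mean(x > gamma):.3e}")

# Importance sampling in the transformed variable: sample Y from Exp(rate = theta) and
# weight by the likelihood ratio f(Y)/g(Y) = exp(-y) / (theta * exp(-theta*y)).
theta = 1.0 / c                   # heuristic tilt: put the IS mean at the threshold
yv = rng.exponential(1.0 / theta, N)
weights = np.exp(-yv) / (theta * np.exp(-theta * yv))
est = np.mean((yv > c) * weights)
err = np.std((yv > c) * weights) / np.sqrt(N)
print(f"TLR/IS estimate:   {est:.3e} +- {err:.1e}")
print(f"exact value:       {exact:.3e}")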

Abstract:

In this paper we apply a new method for the determination of the surface area of carbonaceous materials, using local surface excess isotherms obtained from grand canonical Monte Carlo (GCMC) simulation and a concept of area distribution in terms of the energy well-depth of the solid-fluid interaction. The range of well-depths considered in our GCMC simulations is from 10 to 100 K, which is wide enough to cover all the carbon surfaces we dealt with (for comparison, the well-depth for a perfect graphite surface is about 58 K). Having the set of local surface excess isotherms and the differential area distribution, the overall adsorption isotherm can be obtained in an integral form. Thus, given experimental data of nitrogen or argon adsorption on a carbon material, the differential area distribution can be obtained by an inversion process using the regularization method, and the total surface area is then obtained as the area under this distribution. We test this approach with a number of data sets from the literature and compare our GCMC surface area with that obtained from the classical BET method. In general, we find that the difference between these two surface areas is about 10%, indicating the need for a consistent method to determine the surface area reliably. We therefore suggest the approach of this paper as an alternative to the BET method, because of the long-recognized unrealistic assumptions used in the BET theory. Besides the surface area obtained by this method, it also provides information about the differential area distribution versus the well-depth. This information could be used as a microscopic fingerprint of the carbon surface; it is expected that samples prepared from different precursors and under different activation conditions will have distinct fingerprints. We illustrate this with Cabot BP120, 280 and 460 samples: the differential area distributions obtained from the adsorption of argon at 77 K and of nitrogen, also at 77 K, have exactly the same patterns, suggesting that they reflect intrinsic characteristics of the carbon.
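
The inversion step can be sketched compactly: build a kernel of local isotherms on a grid of well-depths, then recover a non-negative area distribution from an overall isotherm by regularised least squares. The Langmuir-type local isotherms, the synthetic "experimental" data and the regularisation strength below are invented stand-ins for the GCMC local isotherms and the paper's regularization method.

import numpy as np
from scipy.optimize import nnls

pressures = np.logspace(-5, 0, 40)            # relative pressures
well_depths = np.linspace(10.0, 100.0, 19)    # solid-fluid well-depth grid, K

def local_isotherm(eps, p):
    # Stand-in local isotherm per unit area: Langmuir with an energy-dependent constant.
    K = np.exp(eps / 30.0)
    return K * p / (1.0 + K * p)

kernel = np.array([local_isotherm(e, pressures) for e in well_depths]).T   # (n_p, n_eps)

# Synthetic "experimental" data from a known two-peak area distribution plus noise.
true_area = np.exp(-0.5 * ((well_depths - 35) / 6) ** 2) + 0.5 * np.exp(-0.5 * ((well_depths - 75) / 8) ** 2)
data = kernel @ true_area + np.random.default_rng(14).normal(0.0, 0.002, pressures.size)

# Tikhonov-regularised non-negative inversion: augment the kernel with lam * identity.
lam = 0.05
A = np.vstack([kernel, lam * np.eye(well_depths.size)])
b = np.concatenate([data, np.zeros(well_depths.size)])
area_dist, _ = nnls(A, b)

print("total surface area (arbitrary units):", round(float(area_dist.sum()), 2))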