981 results for "Méthodes de Monte Carlo par chaîne de Markov" (Markov chain Monte Carlo methods)


Relevance: 100.00%

Abstract:

The recent claim that the exit probability (EP) of a slightly modified version of the Sznajd model is a continuous function of the initial magnetization is questioned. This result was obtained analytically and confirmed by Monte Carlo simulations, simultaneously and independently, by two different groups (EPL, 82 (2008) 18006; 18007). It stands at odds with an earlier result which yielded a step function for the EP (Europhys. Lett., 70 (2005) 705). The dispute is investigated by proving that the continuous shape of the EP is a direct outcome of a mean-field treatment in the analytical result. As such, it is most likely caused by finite-size effects in the simulations. The improbable alternative would be a signature of the irrelevance of fluctuations in this system. Indeed, evidence is provided in support of the stepwise shape as going beyond the mean-field level. These findings yield new insight into the physics of one-dimensional systems with respect to the validity of a true equilibrium state when solely local update rules are used. The suitability and significance of performing numerical simulations in such cases are discussed. In conclusion, a great deal of caution is required when applying update rules to describe any system, especially social systems. Copyright (C) EPLA, 2011
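As a hedged illustration of the kind of Monte Carlo measurement at issue, the sketch below estimates the exit probability of a simplified one-dimensional Sznajd-type rule (an agreeing pair imposes its opinion on its two outer neighbours) as a function of the initial up-fraction. The lattice size, number of runs and the specific update rule are illustrative assumptions, not the settings of the papers cited.

```python
import random

def sznajd_exit_probability(L=31, p_up=0.6, runs=100, seed=1):
    """Fraction of runs ending in all-up consensus for a 1-D Sznajd-type model
    on a ring (simplified illustration; odd L avoids a frozen alternating state)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        s = [1 if rng.random() < p_up else -1 for _ in range(L)]
        m = sum(s)
        while abs(m) != L:                               # evolve until consensus
            i = rng.randrange(L)
            j = (i + 1) % L
            if s[i] == s[j]:                             # agreeing pair convinces its outer neighbours
                for k in ((i - 1) % L, (j + 1) % L):
                    if s[k] != s[i]:
                        s[k] = s[i]
                        m += 2 * s[i]
        hits += m == L
    return hits / runs

if __name__ == "__main__":
    for p in (0.3, 0.5, 0.7):
        print(f"initial up-fraction {p:.1f} -> exit probability {sznajd_exit_probability(p_up=p):.2f}")
```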

Relevance: 100.00%

Abstract:

In the protein folding problem, solvent-mediated forces are commonly represented by an intra-chain pairwise contact energy. Although this approximation has proven useful in several circumstances, it is limited in other aspects of the problem. Here we show that it is possible to construct two models of the chain-solvent system, one with implicit and the other with explicit solvent, such that both reproduce the same thermodynamic results. First, lattice models treated by analytical methods were used to show that implicit and explicit representations of solvent effects can be energetically equivalent only if local solvent properties are invariant in time and space. Then, applying the same reasoning used for the lattice models, two mutually consistent Monte Carlo off-lattice models for implicit and explicit solvent are constructed, with the solvent properties in the latter now allowed to fluctuate. It is shown that the configurational evolution of the chain as well as the equilibrium conformation of the globule are significantly different for the implicit and explicit solvent systems. Indeed, in strong contrast with the implicit solvent version, the explicit solvent model predicts: (i) a malleable globule, in agreement with the estimated large protein-volume fluctuations; (ii) thermal conformational stability, resembling the conformational heat resistance of globular proteins, whose radii of gyration are practically insensitive to thermal effects over a relatively wide range of temperatures; and (iii) smaller radii of gyration at higher temperatures, indicating that the chain conformational entropy in the unfolded state is significantly smaller than that estimated from random-coil configurations. Finally, we comment on the meaning of these results for the understanding of the folding process. (C) 2009 Elsevier B.V. All rights reserved.
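A minimal sketch of the implicit-solvent ingredient referred to above: an intra-chain pairwise contact energy on the square lattice together with the Metropolis acceptance rule that would drive a Monte Carlo chain simulation. The example configurations and the contact energy scale `eps` are hypothetical, and no explicit-solvent counterpart or move set is implemented here.

```python
import math, random

def contact_energy(coords, eps=1.0):
    """Implicit-solvent energy: -eps per non-bonded nearest-neighbour contact of a
    self-avoiding chain on the square lattice (a stand-in for the solvent-mediated
    pairwise contact approximation discussed above)."""
    sites = {tuple(c): i for i, c in enumerate(coords)}
    E = 0.0
    for i, (x, y) in enumerate(coords):
        for dx, dy in ((1, 0), (0, 1)):                  # count each lattice contact once
            j = sites.get((x + dx, y + dy))
            if j is not None and abs(i - j) > 1:         # exclude covalently bonded neighbours
                E -= eps
    return E

def metropolis_accept(dE, T, rng=random):
    """Standard Metropolis criterion for a proposed chain move."""
    return dE <= 0 or rng.random() < math.exp(-dE / T)

if __name__ == "__main__":
    folded   = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2), (1, 2)]   # compact configuration
    extended = [(i, 0) for i in range(6)]                          # stretched configuration
    dE = contact_energy(folded) - contact_energy(extended)
    print("energy change on collapse:", dE, "| accepted at T=1:", metropolis_accept(dE, 1.0))
```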

Relevance: 100.00%

Abstract:

Stalker (AIAA Paper 87-0403) has suggested that, by ejecting molecules directly upstream from the entire face of a satellite, it is possible to reduce the drag on a satellite in low-Earth orbit and hence maintain orbit with a total fuel mass (for forward ejection and conventional reaction rockets) less than the typical mass requirements of conventional rockets. An analytical treatment is presented here, together with Monte Carlo simulations. These indicate that, to reduce the overall drag on the satellite significantly, collisions between the freestream and ejected molecules must occur at least two satellite diameters upstream. This can be achieved if the molecules are ejected far upstream of the satellite's surface through a sting that projects forward from the satellite. Using some estimates of feasible sting arrangements, we find that the drag on the satellite can be reduced to such an extent that the satellite's orbit can be maintained with a total fuel mass of less than 60% of that required for reaction rockets alone. Upstream ejection is effective in reducing the drag for freestream Knudsen numbers less than approximately 250, but not otherwise.
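A back-of-envelope sketch of the baseline quantity the upstream-ejection scheme tries to reduce: the free-molecular drag on a flat frontal area, estimated by Monte Carlo sampling of molecule velocities from a drifting Maxwellian, with molecules assumed absorbed on impact (no re-emission term). The freestream density, temperature, molecular mass and area are placeholder values; this is not the paper's DSMC code.

```python
import numpy as np

def free_molecular_drag(n=1.0e14, m=2.66e-26, T=1000.0, U=7500.0, A=1.0,
                        samples=200_000, seed=0):
    """Monte Carlo estimate of free-molecular drag on a flat frontal area A [m^2]
    for an absorbing wall.  n: number density [m^-3], m: molecular mass [kg],
    T: freestream temperature [K], U: orbital speed [m/s].  Illustrative values."""
    rng = np.random.default_rng(seed)
    s = np.sqrt(1.380649e-23 * T / m)                  # thermal speed scale sqrt(kT/m)
    vx = rng.normal(U, s, samples)                     # velocity component toward the plate
    pressure = n * m * np.mean(np.where(vx > 0.0, vx**2, 0.0))   # incident momentum flux
    return pressure * A                                # drag force [N]

if __name__ == "__main__":
    drag = free_molecular_drag()
    print(f"drag ~ {drag:.3e} N (hypersonic limit rho*U^2*A = "
          f"{1.0e14 * 2.66e-26 * 7500.0**2:.3e} N)")
```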

Relevance: 100.00%

Abstract:

Market-based transmission expansion planning gives investors information on where investment is most cost-efficient and brings benefits to those who invest in the grid. However, both market issues and power system adequacy problems are the system planner's concern. In this paper, a hybrid probabilistic criterion, the Expected Economical Loss (EEL), is proposed as an index to evaluate the system's overall expected economic losses during operation in a competitive market. It reflects both the investors' and the planner's points of view and improves on the traditional reliability-cost approach. By applying the EEL, system planners can obtain a clear picture of the transmission network's bottlenecks and of the losses arising from these weak points. In turn, this enables planners to assess the worth of providing reliable services. The EEL also contains valuable information for investors in deciding where to commit their capital. The index reflects the random behavior of power systems and the uncertainties of the electricity market. Its performance is enhanced by applying a Normalized Coefficient of Probability (NCP), so it can be utilized in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), showing how the EEL can predict the current system bottleneck under future operational conditions and how the EEL can be used as one of the planning objectives to determine future optimal plans. Monte Carlo simulation is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as the multi-objective optimization tool.
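The EEL idea can be sketched as a plain Monte Carlo loop, leaving out the market model and the GA optimization used in the paper: random line outages and demand levels are sampled, the economic loss of any energy that cannot be delivered is priced, and the average gives the expected loss. All capacities, outage rates and the value of lost load below are hypothetical.

```python
import numpy as np

def expected_economic_loss(n_lines=3, cap=400.0, outage_rate=0.05,
                           mean_demand=1000.0, sd_demand=120.0,
                           voll=5000.0, hours=8760, samples=100_000, seed=0):
    """Monte Carlo sketch of an Expected Economical Loss style index for one
    transmission corridor.  Parameters (MW capacities, forced outage rate,
    $/MWh value of lost load) are hypothetical, not taken from the paper."""
    rng = np.random.default_rng(seed)
    lines_up = rng.random((samples, n_lines)) > outage_rate   # line availability per sampled state
    available = lines_up.sum(axis=1) * cap                    # transfer capability per state [MW]
    demand = rng.normal(mean_demand, sd_demand, samples)      # sampled demand [MW]
    curtailed = np.maximum(demand - available, 0.0)           # load that cannot be delivered
    return voll * curtailed.mean() * hours                    # expected loss [$ per year]

if __name__ == "__main__":
    print(f"EEL ~ ${expected_economic_loss():,.0f} per year")
```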

Relevance: 100.00%

Abstract:

The Direct Simulation Monte Carlo (DSMC) method is used to simulate the flow of rarefied gases. In the Macroscopic Chemistry Method (MCM) for DSMC, chemical reaction rates calculated from local macroscopic flow properties are enforced in each cell. Unlike the standard total collision energy (TCE) chemistry model for DSMC, the new method is not restricted to an Arrhenius form of the reaction rate coefficient, nor is it restricted to a collision cross-section which yields a simple power-law viscosity. For reaction rates of interest in aerospace applications, chemically reacting collisions are generally infrequent events and, as such, local equilibrium conditions are established before a significant number of chemical reactions occur. Hence, the reaction rates which have been used in MCM have been calculated from the reaction rate data which are expected to be correct only for conditions of thermal equilibrium. Here we consider artificially high reaction rates so that the fraction of reacting collisions is not small and propose a simple method of estimating the rates of chemical reactions which can be used in the Macroscopic Chemistry Method in both equilibrium and non-equilibrium conditions. Two tests are presented: (1) The dissociation rates under conditions of thermal non-equilibrium are determined from a zero-dimensional Monte-Carlo sampling procedure which simulates ‘intra-modal’ non-equilibrium; that is, equilibrium distributions in each of the translational, rotational and vibrational modes but with different temperatures for each mode; (2) The 2-D hypersonic flow of molecular oxygen over a vertical plate at Mach 30 is calculated. In both cases the new method produces results in close agreement with those given by the standard TCE model in the same highly nonequilibrium conditions. We conclude that the general method of estimating the non-equilibrium reaction rate is a simple means by which information contained within non-equilibrium distribution functions predicted by the DSMC method can be included in the Macroscopic Chemistry Method.
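In the spirit of test (1), a zero-dimensional sampling sketch: translational, rotational and vibrational energies are drawn at separate mode temperatures ("intra-modal" non-equilibrium) and the fraction of sampled collisions whose total energy exceeds the dissociation energy is reported. The sampling distributions are textbook hard-sphere/harmonic-oscillator forms and the O2-like constants are approximate; this is neither the TCE model nor the Macroscopic Chemistry Method itself.

```python
import numpy as np

def dissociation_fraction(T_tr=10000.0, T_rot=10000.0, T_vib=3000.0,
                          theta_v=2256.0, E_d=59500.0, samples=500_000, seed=0):
    """Fraction of sampled collisions with total energy >= E_d, each mode drawn
    at its own temperature.  Energies are expressed in kelvin (E/k); theta_v and
    E_d are approximate O2-like values used purely for illustration."""
    rng = np.random.default_rng(seed)
    e_tr = rng.gamma(2.0, T_tr, samples)             # hard-sphere relative translational energy
    e_rot = rng.exponential(T_rot, samples)          # two rotational degrees of freedom
    levels = np.arange(0, int(E_d / theta_v) + 1)    # harmonic vibrational levels below E_d
    p = np.exp(-levels * theta_v / T_vib)
    e_vib = theta_v * rng.choice(levels, samples, p=p / p.sum())
    return np.mean(e_tr + e_rot + e_vib >= E_d)

if __name__ == "__main__":
    print("thermal equilibrium     :", dissociation_fraction(T_vib=10000.0))
    print("vibrationally cold gas  :", dissociation_fraction(T_vib=3000.0))
```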

Relevance: 100.00%

Abstract:

Adsorption of binary hydrocarbon mixtures involving methane in carbon slit pores is theoretically studied here from the viewpoints of separation and of the effect of impurities on methane storage. It is seen that even small amounts of ethane, propane, or butane can significantly reduce the methane capacity of carbons. Optimal pore sizes and pressures, depending on impurity concentration, are noted in the present work, suggesting that careful adsorbent and process design can lead to enhanced separation. These results are consistent with earlier literature studies for the infinite dilution limit. For methane storage applications a carbon micropore width of 11.4 Angstrom (based on distance between centers of carbon atoms on opposing walls) is found to be the most suitable from the point of view of lower impurity uptake during high-pressure adsorption and greater impurity retention during low-pressure delivery. The results also theoretically confirm unusual recently reported observations of enhanced methane adsorption in the presence of a small amount of heavier hydrocarbon impurity.
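The capacity-reduction effect can be illustrated with a competitive (extended) Langmuir sketch rather than the molecular simulations used in the work itself; the saturation capacities and affinity constants below are hypothetical, chosen only so that a small partial pressure of a strongly adsorbing impurity visibly displaces methane.

```python
def extended_langmuir(p_ch4, p_imp, q_ch4=8.0, q_imp=6.0, b_ch4=0.05, b_imp=1.5):
    """Competitive (extended) Langmuir loadings [mmol/g] for a methane/impurity
    binary; b in 1/bar, pressures in bar.  Illustrative parameters only."""
    denom = 1.0 + b_ch4 * p_ch4 + b_imp * p_imp
    return q_ch4 * b_ch4 * p_ch4 / denom, q_imp * b_imp * p_imp / denom

if __name__ == "__main__":
    pure, _ = extended_langmuir(35.0, 0.0)                     # pure methane at 35 bar
    mixed, imp = extended_langmuir(35.0 * 0.95, 35.0 * 0.05)   # 5% heavier-hydrocarbon impurity
    print(f"CH4 loading: pure {pure:.2f} mmol/g, with 5% impurity {mixed:.2f} mmol/g "
          f"(impurity loading {imp:.2f} mmol/g)")
```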

Relevance: 100.00%

Abstract:

I shall discuss the quantum and classical dynamics of a class of nonlinear Hamiltonian systems. The discussion will be restricted to systems with one degree of freedom. Such systems cannot exhibit chaos unless the Hamiltonians are time dependent. Thus we shall consider systems with a potential function that has a higher than quadratic dependence on the position and, furthermore, we shall allow the potential function to be a periodic function of time. This is the simplest class of Hamiltonian system that can exhibit chaotic dynamics. I shall show how such systems can be realized in atom optics, where very cold atoms interact with the optical dipole potentials of a far-off-resonance laser. Such systems are ideal for quantum chaos studies as (i) the energy of the atom is small and action scales are of the order of Planck's constant, (ii) the systems are almost perfectly isolated from the decohering effects of the environment and (iii) optical methods enable exquisite time-dependent control of the mechanical potentials seen by the atoms.
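The class of systems described, one degree of freedom with a periodically driven anharmonic potential, can be explored classically with a short stroboscopic-map sketch. The quartic potential, drive strength and frequency below are arbitrary illustrative choices, not parameters from the talk.

```python
import math

def driven_quartic_strobe(x0=1.0, p0=0.0, eps=0.5, omega=1.0,
                          periods=200, steps_per_period=2000):
    """Stroboscopic (Poincare) section of a periodically driven quartic oscillator
    H = p^2/2 + x^4/4 + eps*x*cos(omega*t), integrated with a velocity-Verlet step
    and sampled once per drive period.  Parameter values are illustrative."""
    dt = 2.0 * math.pi / omega / steps_per_period
    force = lambda x, t: -x**3 - eps * math.cos(omega * t)
    x, p, t = x0, p0, 0.0
    section = []
    for _ in range(periods):
        for _ in range(steps_per_period):
            p += 0.5 * dt * force(x, t)      # half kick
            x += dt * p                      # drift
            t += dt
            p += 0.5 * dt * force(x, t)      # half kick at the new position and time
        section.append((x, p))
    return section

if __name__ == "__main__":
    for x, p in driven_quartic_strobe()[:5]:
        print(f"x = {x:+.4f}   p = {p:+.4f}")
```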

Relevance: 100.00%

Abstract:

We present a method of estimating HIV incidence rates in epidemic situations from data on age-specific prevalence and changes in the overall prevalence over time. The method is applied to women attending antenatal clinics in Hlabisa, a rural district of KwaZulu/Natal, South Africa, where transmission of HIV is overwhelmingly through heterosexual contact. A model which gives age-specific prevalence rates in the presence of a progressing epidemic is fitted to prevalence data for 1998 using maximum likelihood methods and used to derive the age-specific incidence. Error estimates are obtained using a Monte Carlo procedure. Although the method is quite general, some simplifying assumptions are made concerning the form of the risk function, and sensitivity analyses are performed to explore the importance of these assumptions. The analysis shows that in 1998 the annual incidence of infection per susceptible woman increased from 5.4 per cent (3.3-8.5 per cent; here and elsewhere ranges give 95 per cent confidence limits) at age 15 years to 24.5 per cent (20.6-29.1 per cent) at age 22 years and declined to 1.3 per cent (0.5-2.9 per cent) at age 50 years; standardized to a uniform age distribution, the overall incidence per susceptible woman aged 15 to 59 was 11.4 per cent (10.0-13.1 per cent); per woman in the population it was 8.4 per cent (7.3-9.5 per cent). Standardized to the age distribution of the female population, the average incidence per woman was 9.6 per cent (8.4-11.0 per cent); standardized to the age distribution of women attending antenatal clinics, it was 11.3 per cent (9.8-13.3 per cent). The estimated incidence depends on the values used for the epidemic growth rate and the AIDS-related mortality. To ensure that, for this population, errors in these two parameters change the age-specific estimates of the annual incidence by less than the standard deviation of the estimates of the age-specific incidence, the AIDS-related mortality should be known to within +/-50 per cent and the epidemic growth rate to within +/-25 per cent, both of which conditions are met. In the absence of cohort studies to measure the incidence of HIV infection directly, useful estimates of the age-specific incidence can be obtained from cross-sectional, age-specific prevalence data and repeat cross-sectional data on the overall prevalence of HIV infection. Several assumptions were made because of the lack of data, but sensitivity analyses show that they are unlikely to affect the overall estimates significantly. These estimates are important in assessing the magnitude of the public health problem, for designing vaccine trials and for evaluating the impact of interventions. Copyright (C) 2001 John Wiley & Sons, Ltd.
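A hedged sketch of the Monte Carlo error-propagation step: under a much simplified steady-state relation between age-specific prevalence and incidence (dP/da = lambda(a)(1 - P), ignoring epidemic growth and AIDS mortality, unlike the paper's model), incidence per susceptible is derived from the age gradient of prevalence and uncertainty is propagated by redrawing the clinic counts. The age groups and counts below are invented for illustration.

```python
import numpy as np

# Hypothetical antenatal-clinic data: age-group midpoints, women tested, HIV-positive.
ages = np.array([17.5, 22.5, 27.5, 32.5, 37.5])
n    = np.array([400, 450, 380, 300, 250])
pos  = np.array([ 40, 112, 122, 108,  95])

def incidence_from_prevalence(prev, ages):
    """Simplified relation dP/da = lambda(a) * (1 - P(a))  ==>  lambda ~ (dP/da)/(1 - P),
    evaluated between adjacent age groups; the full model also handles epidemic
    growth and AIDS-related mortality."""
    dP = np.diff(prev) / np.diff(ages)
    P_mid = 0.5 * (prev[1:] + prev[:-1])
    return dP / (1.0 - P_mid)

def monte_carlo_ci(draws=5000, seed=0):
    """Propagate sampling error by redrawing the positive counts binomially."""
    rng = np.random.default_rng(seed)
    sims = [incidence_from_prevalence(rng.binomial(n, pos / n) / n, ages)
            for _ in range(draws)]
    return np.percentile(sims, [2.5, 50.0, 97.5], axis=0)

if __name__ == "__main__":
    lo, med, hi = monte_carlo_ci()
    for a, l, m, h in zip(0.5 * (ages[1:] + ages[:-1]), lo, med, hi):
        print(f"age {a:.0f}: incidence/susceptible ~ {m:.3f} per year "
              f"(95% MC interval {l:.3f}-{h:.3f})")
```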

Relevance: 100.00%

Abstract:

This article deals with the efficiency of fractional integration parameter estimators. The study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the range ]-1, 1[. The estimation methods evaluated were classified into two groups: heuristic and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by the smaller mean squared error, depends on the stationary/non-stationary and persistence/anti-persistence conditions of the series. The ML estimator was shown to be superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; and the weighted periodogram-based estimator was shown to be superior for non-invertible anti-persistent processes.
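As an illustration of the kind of Monte Carlo experiment described, the sketch below simulates ARFIMA(0, d, 0) series by truncated fractional differencing and estimates d with a log-periodogram (GPH-type) regression, reporting the mean squared error over replications. This is a single semiparametric estimator, not the full set of heuristic and ML methods compared in the article.

```python
import numpy as np

def arfima_0d0(n, d, rng):
    """Simulate ARFIMA(0, d, 0) by truncated MA(inf) expansion:
    psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k."""
    k = np.arange(1, n)
    psi = np.concatenate(([1.0], np.cumprod((k - 1 + d) / k)))
    e = rng.standard_normal(2 * n)
    return np.convolve(e, psi, mode="valid")[:n]

def gph_estimate(x, power=0.5):
    """Log-periodogram (GPH-type) regression estimate of d using m = n**power frequencies."""
    n = len(x)
    m = int(n ** power)
    freqs = 2.0 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    X = np.log(4.0 * np.sin(freqs / 2.0) ** 2)
    slope, _ = np.polyfit(X, np.log(periodogram), 1)
    return -slope                                     # spectral slope is -d

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_d, reps, n = 0.3, 200, 512
    est = np.array([gph_estimate(arfima_0d0(n, true_d, rng)) for _ in range(reps)])
    print(f"d = {true_d}: mean estimate {est.mean():.3f}, MSE {np.mean((est - true_d) ** 2):.4f}")
```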

Relevance: 100.00%

Abstract:

Objective: The Assessing Cost-Effectiveness - Mental Health (ACE-MH) study aims to assess, from a health sector perspective, whether there are options for change that could improve the effectiveness and efficiency of Australia's current mental health services by directing available resources toward 'best practice' cost-effective services. Method: The use of standardized evaluation methods addresses the reservations expressed by many economists about the simplistic use of league tables based on economic studies confounded by differences in methods, context and setting. The cost-effectiveness ratio for each intervention is calculated using economic and epidemiological data. These include systematic reviews and randomised controlled trials for efficacy, the Australian Surveys of Mental Health and Wellbeing for current practice, and a combination of trials and longitudinal studies for adherence. The cost-effectiveness ratios are presented as cost (A$) per disability-adjusted life year (DALY) saved, with a 95% uncertainty interval based on Monte Carlo simulation modelling. An assessment of interventions against 'second filter' criteria ('equity', 'strength of evidence', 'feasibility' and 'acceptability to stakeholders') allows broader concepts of 'benefit' to be taken into account, as well as factors that might influence policy judgements in addition to cost-effectiveness ratios. Conclusions: The main limitation of the study is in the translation of the effect size from trials into a change in the DALY disability weight, which required the use of newly developed methods. While comparisons within disorders are valid, comparisons across disorders should be made with caution. A series of articles is planned to present the results.
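The 95% uncertainty interval step can be sketched as straightforward simulation of the cost-effectiveness ratio: costs and DALYs averted are drawn from assumed distributions and percentiles of the ratio are reported. The gamma distributions and all numbers below are placeholders, not ACE-MH inputs.

```python
import numpy as np

def cer_uncertainty(mean_cost=2.0e6, sd_cost=0.3e6,
                    mean_dalys=250.0, sd_dalys=60.0,
                    draws=20_000, seed=0):
    """Monte Carlo 95% uncertainty interval for a cost (A$) per DALY-averted ratio.
    Gamma distributions (matched to the stated means/SDs) keep both quantities
    positive; the numbers are illustrative only."""
    rng = np.random.default_rng(seed)
    def gamma(mean, sd, size):
        shape = (mean / sd) ** 2              # match mean and variance of a gamma
        return rng.gamma(shape, mean / shape, size)
    ratio = gamma(mean_cost, sd_cost, draws) / gamma(mean_dalys, sd_dalys, draws)
    return np.percentile(ratio, [2.5, 50.0, 97.5])

if __name__ == "__main__":
    lo, mid, hi = cer_uncertainty()
    print(f"A${mid:,.0f} per DALY averted (95% uncertainty interval A${lo:,.0f}-A${hi:,.0f})")
```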

Relevance: 100.00%

Abstract:

In this paper, we propose a new nonlocal density functional theory characterization procedure, the finite wall thickness model, for nanoporous carbons, whereby heterogeneity of pore size and pore walls in the carbon is probed simultaneously. We determine the pore size distributions and pore wall thickness distributions of several commercial activated carbons and coal chars, with good correspondence with X-ray diffraction. It is shown that the conventional infinite wall thickness approach overestimates the pore size slightly. Pore-pore correlation has been shown to have a negligible effect on prediction of pore size and pore wall thickness distributions for small molecules such as argon used in characterization. By utilizing the structural parameters (pore size and pore wall thickness distribution) in the generalized adsorption isotherm (GAI) we are able to predict adsorption uptake of supercritical gases in BPL and Norit RI Extra carbons, in excellent agreement with experimental adsorption uptake data up to 60 MPa. The method offers a useful technique for probing features of the solid skeleton, hitherto studied by crystallographic methods.
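The generalized adsorption isotherm (GAI) step has a compact structure: total uptake is the pore-size-distribution-weighted sum of single-pore isotherms. In the sketch below the nonlocal DFT kernel is replaced by a hypothetical Langmuir-type stand-in `kernel_uptake`, and the PSD is an invented Gaussian, purely to show the bookkeeping; wall-thickness heterogeneity is not represented.

```python
import numpy as np

def kernel_uptake(pressure, widths):
    """Stand-in single-pore isotherm rho(P, H); in the real method this would be a
    nonlocal DFT kernel indexed by pore width (and, in the finite wall thickness
    model, by wall thickness).  Parameters below are hypothetical."""
    b = 0.8 / widths                 # narrower pores adsorb more strongly (assumed)
    qmax = 0.5 * widths              # wider pores hold more at saturation (assumed)
    return qmax * b * pressure / (1.0 + b * pressure)

def generalized_adsorption_isotherm(pressure, widths, psd):
    """GAI: N(P) = sum_i f(H_i) * rho(P, H_i) * dH over a discretized PSD."""
    dH = widths[1] - widths[0]
    return float(np.sum(psd * kernel_uptake(pressure, widths)) * dH)

if __name__ == "__main__":
    widths = np.linspace(6.0, 20.0, 50)                    # pore widths [angstrom]
    psd = np.exp(-0.5 * ((widths - 11.4) / 2.0) ** 2)      # invented PSD peaked at 11.4 A
    psd /= psd.sum() * (widths[1] - widths[0])             # normalize: sum f(H) dH = 1
    for P in (0.1, 1.0, 10.0, 60.0):                       # pressure [MPa], illustrative
        print(f"P = {P:5.1f} MPa: predicted uptake "
              f"{generalized_adsorption_isotherm(P, widths, psd):.2f}")
```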

Relevance: 100.00%

Abstract:

This paper presents a method for estimating the posterior probability density of the cointegrating rank of a multivariate error correction model. A second contribution is the careful elicitation of the prior for the cointegrating vectors derived from a prior on the cointegrating space. This prior obtains naturally from treating the cointegrating space as the parameter of interest in inference and overcomes problems previously encountered in Bayesian cointegration analysis. Using this new prior and Laplace approximation, an estimator for the posterior probability of the rank is given. The approach performs well compared with information criteria in Monte Carlo experiments. (C) 2003 Elsevier B.V. All rights reserved.
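The final step, turning Laplace-approximated marginal likelihoods into posterior probabilities of the cointegrating rank, can be sketched generically. The per-rank log-likelihoods, log-priors, Hessian log-determinants and parameter counts below are invented numbers; the paper's prior on the cointegrating space and its error-correction-model calculations are not reproduced here.

```python
import numpy as np

def laplace_log_marginal(loglik_hat, logprior_hat, hess_logdet, k):
    """Laplace approximation to one model's log marginal likelihood:
    log p(y|r) ~ log p(y|theta_hat) + log p(theta_hat) + (k/2) log(2*pi) - 0.5 log|H|,
    where H is the negative Hessian at the posterior mode and k the parameter count."""
    return loglik_hat + logprior_hat + 0.5 * k * np.log(2.0 * np.pi) - 0.5 * hess_logdet

def posterior_rank_probabilities(log_marginals, log_prior_ranks=None):
    """Normalize model evidences into posterior probabilities over the rank."""
    lm = np.asarray(log_marginals, dtype=float)
    if log_prior_ranks is not None:
        lm = lm + np.asarray(log_prior_ranks, dtype=float)
    lm -= lm.max()                                  # log-sum-exp trick for stability
    w = np.exp(lm)
    return w / w.sum()

if __name__ == "__main__":
    # Hypothetical summaries for ranks r = 0..3 of a small error correction model.
    loglik  = np.array([-412.0, -396.5, -395.8, -395.5])
    logpri  = np.array([  -4.0,   -9.0,  -14.0,  -19.0])
    logdetH = np.array([  10.0,   22.0,   35.0,   49.0])
    k       = np.array([     3,      8,     13,     18])
    lm = laplace_log_marginal(loglik, logpri, logdetH, k)
    print("posterior P(rank = r):", np.round(posterior_rank_probabilities(lm), 3))
```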

Relevance: 100.00%

Abstract:

In this paper we consider the problem of providing standard errors of the component means in normal mixture models fitted to univariate or multivariate data by maximum likelihood via the EM algorithm. Two methods of estimation of the standard errors are considered: the standard information-based method and the computationally intensive bootstrap method. They are compared empirically by their application to three real data sets and by a small-scale Monte Carlo experiment.
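A compact sketch of the bootstrap alternative, using scikit-learn's EM-based GaussianMixture rather than the authors' code: the mixture is refitted to nonparametric resamples and the spread of the refitted component means gives the standard errors, with components sorted by mean to limit label switching. The sample sizes and mixture parameters are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bootstrap_component_mean_se(x, n_components=2, n_boot=200, seed=0):
    """Bootstrap standard errors of the component means of a univariate normal
    mixture fitted by maximum likelihood via EM.  Components are sorted by mean
    on each refit to reduce label-switching artefacts."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x).reshape(-1, 1)
    boot_means = []
    for _ in range(n_boot):
        xb = x[rng.integers(0, len(x), len(x))]             # nonparametric resample
        gm = GaussianMixture(n_components, n_init=3, random_state=0).fit(xb)
        boot_means.append(np.sort(gm.means_.ravel()))
    return np.std(boot_means, axis=0, ddof=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(4.0, 1.0, 100)])
    print("bootstrap SEs of the component means:", bootstrap_component_mean_se(data))
```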

Relevance: 100.00%

Abstract:

SETTING: Chronic obstructive pulmonary disease (COPD) is the third leading cause of death among adults in Brazil. OBJECTIVE: To evaluate mortality and hospitalisation trends due to COPD in Brazil during the period 1996-2008. DESIGN: We used the official health statistics system to obtain data on mortality (1996-2008) and morbidity (1998-2008) due to COPD and to all respiratory diseases (tuberculosis: codes A15-A16; lung cancer: code C34; and all diseases coded J40-J47 in the 10th Revision of the International Classification of Diseases) as the underlying cause, in persons aged 45-74 years. We used the Joinpoint Regression Program, a log-linear Poisson regression model that applies a Monte Carlo permutation test to identify points where trend lines change significantly in magnitude or direction, to identify peaks and trends. RESULTS: The annual per cent change in age-adjusted death rates due to COPD was -2.7% in men (95%CI -3.6 to -1.8) and -2.0% in women (95%CI -2.9 to -1.0); for all respiratory causes it was -1.7% (95%CI -2.4 to -1.0) in men and -1.1% (95%CI -1.8 to -0.3) in women. Although hospitalisation rates for COPD are declining, the hospital admission fatality rate increased in both sexes. CONCLUSION: COPD is still a leading cause of mortality in Brazil despite the observed decline in mortality and hospitalisation rates for both sexes.
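The annual per cent change reported above comes from a log-linear trend on the rates; the sketch below shows how APC = 100*(exp(beta) - 1) is obtained from the year coefficient of a Poisson regression with a population offset, using made-up death and population counts and omitting the Joinpoint permutation test for change-points.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical COPD deaths and mid-year population (ages 45-74), 1996-2008.
years  = np.arange(1996, 2009)
pop    = np.linspace(28e6, 36e6, len(years))
deaths = np.round(np.exp(np.log(0.0009) - 0.027 * (years - 1996)) * pop)

# Poisson log-linear model: log E[deaths] = a + b*year + log(pop).
X = sm.add_constant(years - years[0])
fit = sm.GLM(deaths, X, family=sm.families.Poisson(), offset=np.log(pop)).fit()
b = fit.params[1]
ci = fit.conf_int()[1]                         # 95% CI for the year coefficient
apc = 100.0 * (np.exp(b) - 1.0)                # annual per cent change in the rate
print(f"APC = {apc:.1f}% (95% CI {100 * (np.exp(ci[0]) - 1):.1f} to "
      f"{100 * (np.exp(ci[1]) - 1):.1f})")
# The Joinpoint software additionally uses a Monte Carlo permutation test to decide
# how many change-points the trend needs; that step is not reproduced here.
```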

Relevance: 100.00%

Abstract:

Radiation dose calculations in nuclear medicine depend on quantification of activity via planar and/or tomographic imaging methods. However, both methods have inherent limitations, and the accuracy of activity estimates varies with object size, background levels, and other variables. The goal of this study was to evaluate the limitations of quantitative imaging with planar and single photon emission computed tomography (SPECT) approaches, with a focus on activity quantification for use in calculating absorbed dose estimates for normal organs and tumors. To do this we studied a series of phantoms of varying geometric complexity, with three radionuclides whose decay schemes varied from simple to complex. Four aqueous concentrations of (99m)Tc, (131)I, and (111)In (74, 185, 370, and 740 kBq mL(-1)) were placed in spheres of four different sizes in a water-filled phantom, with three different levels of activity in the surrounding water. Planar and SPECT images of the phantoms were obtained on a modern SPECT/computed tomography (CT) system. These radionuclide and concentration/background studies were repeated using a cardiac phantom and a modified torso phantom with liver and "tumor" regions containing the radionuclide concentrations and with the same varying background levels. Planar quantification was performed using the geometric mean approach with attenuation correction (AC), and with and without scatter correction (SC and NSC). SPECT images were reconstructed using attenuation maps (AM) for AC; scatter windows were used to perform SC during image reconstruction. For spherical sources with corrected data, good accuracy was observed (generally within +/- 10% of known values) for the largest sphere (11.5 mL) for both planar and SPECT methods with (99m)Tc and (131)I, but results were poorest, and deviated most from known values, for smaller objects, most notably for (111)In. SPECT quantification was affected by the partial volume effect in smaller objects and generally showed larger errors than the planar results in these cases for all radionuclides. For the cardiac phantom, results were the most accurate of all of the experiments for all radionuclides. Background subtraction was an important factor influencing these results. The contribution of scattered photons was important in quantification with (131)I; if scatter was not accounted for, activity tended to be overestimated using planar quantification methods. For the torso phantom experiments, the results show a clear underestimation of activity when compared to the previous experiments with spherical sources for all radionuclides. Despite some variations observed as the level of background increased, the SPECT results were more consistent across different activity concentrations. Planar or SPECT quantification on state-of-the-art gamma cameras with appropriate quantitative processing can provide accuracies of better than 10% for large objects and modest target-to-background concentrations; however, when smaller objects are used, in the presence of higher background, and for nuclides with more complex decay schemes, SPECT quantification methods generally produce better results. Health Phys. 99(5):688-701; 2010
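The planar geometric-mean step can be written as a single conjugate-view expression; the sketch below implements A = sqrt(I_ant * I_post) * exp(mu*T/2) * f / C with the usual source self-attenuation factor f = (mu_s*t/2)/sinh(mu_s*t/2). The attenuation coefficients, thicknesses, sensitivity and count rates are illustrative values, not the study's calibration.

```python
import numpy as np

def conjugate_view_activity(counts_ant, counts_post, mu=0.153, thickness=20.0,
                            mu_src=0.153, src_thickness=4.0, sensitivity=90.0):
    """Conjugate-view (geometric-mean) planar activity estimate:
        A = sqrt(I_ant * I_post) * exp(mu*T/2) * f / C,
        f = (mu_s * t / 2) / sinh(mu_s * t / 2)   (source self-attenuation factor).
    mu in 1/cm (about 0.15/cm in water at 140 keV), thicknesses in cm,
    sensitivity C in counts/s per MBq; all numbers here are illustrative."""
    f = (mu_src * src_thickness / 2.0) / np.sinh(mu_src * src_thickness / 2.0)
    geometric_mean = np.sqrt(counts_ant * counts_post)
    return geometric_mean * np.exp(mu * thickness / 2.0) * f / sensitivity

if __name__ == "__main__":
    # Hypothetical background-subtracted count rates (counts/s) from paired views.
    activity = conjugate_view_activity(counts_ant=350.0, counts_post=220.0)
    print(f"estimated activity ~ {activity:.1f} MBq")
```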