963 results for Monte-Carlo Simulation Method
Abstract:
We model nongraphitized carbon black surfaces and investigate adsorption of argon on these surfaces using grand canonical Monte Carlo simulation. In this model, the nongraphitized surface is represented as a stack of graphene layers with some carbon atoms of the top graphene layer randomly removed. The percentage of surface carbon atoms removed and the effective size of the defect created by the removal are the key parameters characterizing the nongraphitized surface. The patterns of the adsorption isotherm and isosteric heat are studied as a function of these surface parameters as well as pressure and temperature. The adsorption isotherm shows steplike behavior on a perfect graphite surface and becomes smoother on nongraphitized surfaces. For graphitized thermal carbon black, the isosteric heat versus loading increases over the submonolayer coverage, declines sharply as the second layer starts to form, and then increases slightly beyond that point. On the other hand, the isosteric heat versus loading for a highly nongraphitized surface shows a general decline with loading, owing to the energetic heterogeneity of the surface. Only when the fluid-fluid interaction is greater than the surface energetic factor do we see a minimum-maximum in the isosteric heat versus loading. These simulated isosteric heats agree well with the experimental results on the graphitization of Spheron 6 (Polley, M. H.; Schaeffer, W. D.; Smith, W. R. J. Phys. Chem. 1953, 57, 469; Beebe, R. A.; Young, D. M. J. Phys. Chem. 1954, 58, 93). Adsorption isotherms and isosteric heats in pores whose walls have defects are also studied by simulation, and the patterns of the isotherm and isosteric heat could be used to identify the fingerprint of the surface.
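The grand canonical ensemble moves underlying this kind of adsorption study can be sketched as follows. This is a minimal bulk-phase illustration only: the Lennard-Jones argon parameters are standard literature values, while the box size, temperature, activity, the absence of a cutoff and of periodic boundaries, and above all the omission of the solid-fluid potential and surface defects are simplifying assumptions, not the setup used in the paper.

```python
import numpy as np

# Illustrative Lennard-Jones parameters for argon (epsilon/k_B in K, sigma in angstrom);
# box size, temperature and activity are assumptions for this sketch.
EPS, SIG = 119.8, 3.405
T, L = 87.3, 30.0            # temperature (K) and cubic box edge (angstrom)
BETA = 1.0 / T               # energies are expressed in Kelvin (epsilon/k_B units)
Z = 1e-4                     # activity z = exp(beta*mu)/Lambda^3, assumed (angstrom^-3)
V = L**3

def lj_energy(pos, coords):
    """Total LJ energy (in K) of a particle at `pos` with all particles in `coords`."""
    if len(coords) == 0:
        return 0.0
    r = np.linalg.norm(coords - pos, axis=1)
    sr6 = (SIG / r) ** 6
    return np.sum(4.0 * EPS * (sr6**2 - sr6))

def gcmc_step(coords, rng):
    """One insertion/deletion attempt of a grand canonical MC sweep."""
    if rng.random() < 0.5:                      # attempt insertion
        new = rng.random(3) * L
        dU = lj_energy(new, coords)
        if rng.random() < min(1.0, Z * V / (len(coords) + 1) * np.exp(-BETA * dU)):
            coords = np.vstack([coords, new])
    elif len(coords) > 0:                       # attempt deletion
        i = rng.integers(len(coords))
        others = np.delete(coords, i, axis=0)
        dU = -lj_energy(coords[i], others)      # energy change on removal
        if rng.random() < min(1.0, len(coords) / (Z * V) * np.exp(-BETA * dU)):
            coords = others
    return coords

rng = np.random.default_rng(0)
coords = np.empty((0, 3))
for _ in range(20000):
    coords = gcmc_step(coords, rng)
print("particles in the box after 20000 attempts:", len(coords))
```

A production study would add the graphite solid-fluid potential (e.g. a Steele-type wall), displacement moves, periodic boundaries and long equilibration before accumulating isotherm points.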
Abstract:
Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a metachain to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely. [Bayesian phylogenetic inference; heating parameter; Markov chain Monte Carlo; replicated chains.]
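A minimal sketch of the Metropolis-coupled MCMC machinery discussed here, using a toy one-dimensional bimodal target in place of a phylogenetic posterior; the heating scheme beta_i = 1/(1 + lambda*i), the number of chains and all numerical settings are illustrative assumptions rather than MrBayes defaults.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    """Toy bimodal log-density standing in for a multimodal phylogenetic posterior."""
    return np.logaddexp(-0.5 * ((x - 4.0) / 0.5) ** 2,
                        -0.5 * ((x + 4.0) / 0.5) ** 2)

n_chains, lam = 4, 0.2                              # one cold + three heated chains (assumed)
betas = 1.0 / (1.0 + lam * np.arange(n_chains))     # chain i samples target**beta_i
x = np.zeros(n_chains)

samples = []
for sweep in range(20000):
    # Metropolis update within each (possibly heated) chain
    for i in range(n_chains):
        prop = x[i] + rng.normal(0.0, 1.0)
        if np.log(rng.random()) < betas[i] * (log_target(prop) - log_target(x[i])):
            x[i] = prop
    # propose a state swap between two adjacent chains
    i = rng.integers(n_chains - 1)
    log_acc = (betas[i] - betas[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
    if np.log(rng.random()) < log_acc:
        x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])                            # only the cold chain is recorded

print("cold-chain mean (should be near 0 for the symmetric target):",
      np.mean(samples[5000:]))
```

The swap move is what lets the cold chain cross between modes; with high temperatures the heated chains rarely sit on high-posterior trees, which is the precision trade-off noted in the abstract.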
Abstract:
GCMC simulations are applied to the adsorption of sub-critical methanol and ethanol on graphitized carbon black at 300 K. The carbon black was modelled both with and without carbonyl functional groups. Large differences are seen between the amounts adsorbed for different carbonyl configurations at low pressure, prior to monolayer coverage. Once a monolayer has formed on the carbon black, the adsorption behaviour is similar for the model surfaces with and without functional groups. Simulated isotherms for low carbonyl concentrations or no carbonyls are qualitatively similar to the few experimental isotherms available in the literature for methanol and ethanol adsorption on highly graphitized carbon black. Isosteric heats and adsorbed-phase heat capacities are shown to be very sensitive to the carbonyl configuration. A maximum is observed in the adsorbed-phase heat capacity of the alcohols in all simulations, but it is unrealistically high for a plain graphite surface. Adding carbonyls to the surface greatly reduces this maximum and brings it close to experimental data at carbonyl concentrations as low as 0.09 carbonyls/nm².
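The isosteric heats referred to here are conventionally obtained from fluctuations of particle number and configurational energy sampled during a GCMC run. A minimal post-processing sketch (the sample arrays are placeholders for values collected every few MC sweeps):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def isosteric_heat(u_samples, n_samples, temperature):
    """Fluctuation estimate of the isosteric heat (J per molecule) from GCMC samples.

    u_samples : configurational energy of each sampled configuration (J)
    n_samples : number of adsorbed molecules in each sampled configuration
    """
    u = np.asarray(u_samples, dtype=float)
    n = np.asarray(n_samples, dtype=float)
    cov_un = np.mean(u * n) - np.mean(u) * np.mean(n)
    var_n = np.mean(n * n) - np.mean(n) ** 2
    return K_B * temperature - cov_un / var_n

# hypothetical usage with arrays collected during a GCMC run at 300 K:
# q_st = isosteric_heat(u_samples, n_samples, 300.0)
```

The adsorbed-phase heat capacity discussed in the abstract follows from analogous energy-fluctuation formulas evaluated at fixed loading.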
Abstract:
The adsorption of Lennard-Jones fluids (argon and nitrogen) onto a graphitized thermal carbon black surface was studied with a Grand Canonical Monte Carlo Simulation (GCMC). The surface was assumed to be finite in length and composed of three graphene layers. When the GCMC simulation was used to describe adsorption on a graphite surface, an over-prediction of the isotherm was consistently observed in the pressure regions where the first and second layers are formed. To remove this over-prediction, surface mediation was accounted for to reduce the fluid-fluid interaction. Do and co-workers have introduced the so-called surface-mediation damping factor to correct the over-prediction for the case of a graphite surface of infinite extent, and this approach has yielded a good description of the adsorption isotherm. In this paper, the effects of the finite size of the graphene layer on the adsorption isotherm and how these would affect the extent of the surface mediation were studied. It was found that this finite-surface model provides a better description of the experimental data for graphitized thermal carbon black of high surface area (i.e. small crystallite size) while the infinite-surface model describes data for carbon black of very low surface area (i.e. large crystallite size).
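One way to picture the surface-mediation idea is a fluid-fluid pair energy that is reduced when both molecules sit in the first adsorbed layer. The functional form, layer criterion and damping strength below are illustrative assumptions, not the exact damping factor of Do and co-workers:

```python
EPS_FF, SIG_FF = 119.8, 3.405   # LJ parameters for argon (epsilon/k_B in K, sigma in angstrom)

def damped_ff_energy(r, z1, z2, first_layer_z=5.0, damping=0.1):
    """Fluid-fluid LJ energy (K) with an illustrative surface-mediation reduction.

    The pair interaction is reduced by `damping` when both molecules lie within
    the first adsorbed layer (z below `first_layer_z`, in angstrom above the wall).
    Layer criterion and damping strength are assumptions for this sketch.
    """
    sr6 = (SIG_FF / r) ** 6
    u = 4.0 * EPS_FF * (sr6**2 - sr6)
    if z1 < first_layer_z and z2 < first_layer_z:
        u *= 1.0 - damping
    return u

# example: damped_ff_energy(3.8, z1=3.4, z2=3.6)
```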
Abstract:
Several procedures for calculating the heat of adsorption from Monte Carlo simulations of a heterogeneous adsorbent are presented. Simulations were performed to generate isotherms for nitrogen at 77 K and methane at 273.15 K in graphitic slit pores of various widths. The procedures were then applied to calculate the heat of adsorption of an activated carbon with an arbitrary pore size distribution. The consistency of the different procedures shows them to be correct in calculating the interaction energy contributions to the heat of adsorption. The procedure currently favoured in the literature for this type of calculation is shown to be incorrect, and in serious error, when calculating the heat of adsorption of activated carbon.
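A consistent way to combine per-pore GCMC results into a heat of adsorption for an adsorbent with a pore size distribution is to accumulate the loading and configurational energy over the distribution first and differentiate afterwards. The sketch below follows that route; it is one self-consistent bookkeeping, not necessarily the exact procedure advocated in the paper:

```python
import numpy as np

def composite_isosteric_heat(n_per_pore, u_per_pore, weights, T):
    """Isosteric heat of a heterogeneous adsorbent from per-pore GCMC results.

    n_per_pore, u_per_pore : arrays of shape (n_pores, n_pressures) with the mean
        loading (mol/kg) and mean configurational energy (J/kg) of each pore width
        at each bulk pressure, from independent GCMC runs.
    weights : pore size distribution weights (fraction of each pore width).
    Returns q_st = R*T - d<U_total>/d<N_total> on the composite loading grid,
    i.e. energies and loadings are combined over the distribution before the
    derivative is taken.
    """
    R = 8.314
    n_tot = weights @ n_per_pore          # composite loading vs pressure
    u_tot = weights @ u_per_pore          # composite configurational energy vs pressure
    return R * T - np.gradient(u_tot, n_tot)
```

The point of the abstract is precisely that shortcuts which average per-pore heats directly, instead of differentiating the combined quantities, can be seriously in error.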
Abstract:
Carbons with slitlike pores can serve as effective host materials for storage of hythane fuel, a bridge between petrol combustion and hydrogen fuel cells. We have used grand canonical Monte Carlo simulation to model the storage of hydrogen and methane mixtures at 293 K and mixture pressures up to 2 MPa. We find that these pores serve as efficient vessels for the storage of hythane fuel near ambient temperatures and low pressures. For carbons having optimized slitlike pores of size H ≅ 7 Å (a pore width that can accommodate one adsorbed methane layer) and a bulk hydrogen mole fraction >= 0.9, the volumetric stored energy exceeds the 2010 target of 5.4 MJ dm⁻³ established by the U.S. FreedomCAR Partnership. At the same conditions, the hydrogen content in the slitlike carbon pores is approximately 7% by energy. Thus, we have obtained the composition corresponding to hythane fuel in carbon nanospaces with greatly enhanced volumetric energy in comparison with the traditional compression method. We propose a simple system with an added container filled with pure free/adsorbed methane for adjusting the composition of the desorbed mixture as needed during delivery. Our simulation results indicate that light slit-pore carbon nanomaterials with optimized parameters are suitable filling vessels for storage of hythane fuel. The proposed system, consisting of a main vessel with physisorbed hythane fuel and an extra container filled with pure free/adsorbed methane, will be particularly suitable for combustion of hythane fuel in buses and passenger cars near ambient temperatures and low pressures.
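The volumetric stored energy and the hydrogen share by energy follow from simple bookkeeping over the stored amounts of the two gases. In the sketch below the lower heating values are standard literature numbers, while the loadings are placeholders rather than simulation results from the paper:

```python
# Convert total (adsorbed + compressed) amounts of CH4 and H2 per unit vessel
# volume into a volumetric stored energy and a hydrogen share by energy.
LHV_CH4 = 802.3e-3   # lower heating value of methane, MJ per mol
LHV_H2 = 241.8e-3    # lower heating value of hydrogen, MJ per mol

def hythane_energy(n_ch4, n_h2):
    """n_ch4, n_h2: mol stored per dm^3 of vessel. Returns (MJ/dm^3, H2 energy fraction)."""
    e_ch4 = n_ch4 * LHV_CH4
    e_h2 = n_h2 * LHV_H2
    total = e_ch4 + e_h2
    return total, e_h2 / total

# placeholder loadings for illustration only
energy_density, h2_share = hythane_energy(n_ch4=7.0, n_h2=1.5)
print(f"{energy_density:.2f} MJ/dm^3, H2 share by energy {100 * h2_share:.1f}%")
```

Comparing the first output against the 5.4 MJ dm⁻³ target and the second against the desired hythane composition is how candidate pore widths and bulk compositions are screened.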
Abstract:
Aim: To identify an appropriate dosage strategy for patients receiving enoxaparin by continuous intravenous infusion (CII). Methods: Monte Carlo simulations were performed in NONMEM (200 replicates of 1000 patients) to predict steady-state anti-Xa concentrations (Css) for patients receiving a CII of enoxaparin. The covariate distribution model was simulated based on covariate demographics in the CII study population. The impact of patient weight, renal function (creatinine clearance, CrCL) and patient location (intensive care unit (ICU)) was evaluated. A population pharmacokinetic model was used as the input-output model (1-compartment, first-order output model with a mixed residual error structure). Success of a dosing regimen was based on the percentage of Css values within the therapeutic range of 0.5 IU/ml to 1.2 IU/ml. Results: The best dose for patients in the ICU was 4.2 IU/kg/h (success mean 64.8%, 90% prediction interval (PI): 60.1–69.8%) if CrCL60ml/min, the best dose was 8.3 IU/kg/h (success mean 65.4%, 90% PI: 58.5–73.2%). Simulations suggest that there was a 50% improvement in the success of the CII if the dose rate for ICU patients with CrCL
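The simulation logic can be illustrated for a continuous infusion, where the steady-state concentration is the infusion rate divided by clearance. Every population parameter below (typical clearance, between-subject variability, weight distribution) is invented for illustration and is not the published NONMEM model:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_success(dose_rate_iu_kg_h, n_reps=200, n_patients=1000):
    """Fraction of steady-state anti-Xa concentrations within 0.5-1.2 IU/ml.

    For a continuous infusion, Css = infusion rate / clearance. The population
    parameters below are assumptions for this sketch only.
    """
    successes = []
    for _ in range(n_reps):
        weight = rng.normal(80.0, 15.0, n_patients).clip(40, 150)     # body weight, kg
        cl = 0.5 * np.exp(rng.normal(0.0, 0.3, n_patients))           # clearance, L/h (assumed)
        css = dose_rate_iu_kg_h * weight / (cl * 1000.0)              # IU/ml
        successes.append(np.mean((css >= 0.5) & (css <= 1.2)))
    successes = np.array(successes)
    return successes.mean(), np.percentile(successes, [5, 95])

mean, pi90 = simulate_success(4.2)
print(f"success {100*mean:.1f}%, 90% PI {100*pi90[0]:.1f}-{100*pi90[1]:.1f}%")
```

The published analysis additionally conditions clearance on CrCL and ICU status, which is what drives the different recommended dose rates quoted above.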
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to the groundwater resources incorporates both the quantification of the probability of occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasizes a greater dependency on the potential pollution sources, rather than the traditional approach where assessment is based mainly on the intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation methods whereby random pollution events are generated according to the same distribution as historically occurring events or an a priori probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as the risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch-processing file. GIS software was employed to produce the input files and to present the results. The functionalities of the method, as well as its sensitivities to the model grid sizes, contaminant loading rates, length of stress periods, and the historical frequencies of occurrence of pollution events, were evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input to the transport model. The results of the applications of the method are presented as tables, graphs and spatial maps. Varying the model grid sizes indicates no significant effect on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size. Also, the contaminant plume migrates faster with coarse grid sizes than with finer grid sizes. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases in nonlinear proportion to the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and quantitatively presented as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a great asset of the method, and a large benefit over contemporary risk and vulnerability methods.
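The stochastic source-term trigger and the exceedance-based risk measure described above can be sketched as follows; the contaminant mass distribution is a placeholder for the historically derived one:

```python
import numpy as np

rng = np.random.default_rng(7)

def generate_daily_source_terms(p_event, mass_low, mass_high, n_active_cells):
    """One stochastic-source realisation for a single stress period.

    For each active cell, a pollution event is triggered when a uniform random
    number falls below that cell's daily probability of occurrence; triggered
    cells receive a contaminant mass drawn from an assumed uniform range, which
    stands in for the historically derived loading distribution.
    """
    triggered = rng.random(n_active_cells) < p_event
    mass = np.where(triggered,
                    rng.uniform(mass_low, mass_high, n_active_cells),
                    0.0)
    return mass  # per-cell source terms passed to the transport model (e.g. MT3DMS)

def exceedance_risk(concentrations, threshold):
    """Risk at a monitoring point: fraction of realisations exceeding `threshold`.

    `concentrations` holds one simulated value per Monte Carlo realisation.
    """
    return np.mean(np.asarray(concentrations) > threshold)
```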
Abstract:
An inherent weakness in the management of large-scale projects is the failure to achieve the scheduled completion date. When projects are planned with the objective of on-time completion, the initial planning plays a vital role in the successful achievement of project deadlines. Cost and quality are additional priorities when such projects are being executed. This article proposes a methodology for assessing the achievable time duration of a project through risk analysis with the application of a Monte Carlo simulation technique. The methodology is demonstrated using a case application of a cross-country petroleum pipeline construction project.
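A minimal sketch of such a schedule-risk Monte Carlo: activity durations are drawn from three-point (triangular) estimates and summed along a serial chain, and the completion-date risk is read off the resulting distribution. The activities, estimates and deadline are placeholders, and a real pipeline project would use a full activity network rather than a serial chain:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative pipeline-construction activities with (optimistic, most likely,
# pessimistic) duration estimates in days; the numbers are placeholders.
activities = {
    "right-of-way clearance": (20, 30, 50),
    "trenching":              (40, 55, 80),
    "pipe stringing/welding": (60, 75, 110),
    "hydrotesting":           (10, 15, 25),
}

def simulate_completion(n_trials=10000):
    """Total duration of the (serial) activity chain for each Monte Carlo trial."""
    totals = np.zeros(n_trials)
    for low, mode, high in activities.values():
        totals += rng.triangular(low, mode, high, n_trials)
    return totals

durations = simulate_completion()
deadline = 180.0
print(f"P(finish within {deadline:.0f} days) = {np.mean(durations <= deadline):.2f}")
print(f"P90 duration = {np.percentile(durations, 90):.0f} days")
```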
Abstract:
This paper introduces a new technique for the investigation of limited-dependent variable models. It illustrates that variable precision rough set theory (VPRS), allied with a modern method of classification or discretisation of data, can outperform the more standard approaches employed in economics, such as a probit model. These approaches and certain inductive decision tree methods are compared, through a Monte Carlo simulation approach, in the analysis of the decisions reached by the UK Monopolies and Mergers Commission. We show that, particularly in small samples, the VPRS model can improve on more traditional models, both in-sample and particularly in out-of-sample prediction. A similar improvement in out-of-sample prediction over the decision tree methods is also shown.
Abstract:
We investigate the feasibility of simultaneously suppressing the amplification noise and nonlinearity, the most fundamental limiting factors in modern optical communication. To accomplish this task we developed a general design optimisation technique based on concepts of noise and nonlinearity management. We demonstrate the efficiency of this approach by applying it to the design optimisation of transmission lines with periodic dispersion compensation using Raman and hybrid Raman-EDFA amplification. Moreover, we showed, using nonlinearity management considerations, that the optimal performance in high bit-rate dispersion-managed fibre systems with hybrid amplification is achieved for a certain amplifier spacing, which differs from the commonly known optimal noise performance corresponding to fully distributed amplification. Complete knowledge of the signal statistics, required for an accurate estimation of the bit error rate, is crucial for modern transmission links with strong inherent nonlinearity. Therefore, we implemented the advanced multicanonical Monte Carlo (MMC) method, acknowledged for its efficiency in estimating distribution tails. We have accurately computed marginal probability density functions for soliton parameters by numerical modelling of the Fokker-Planck equation using the MMC simulation technique. Moreover, applying the MMC method we have studied the BER penalty caused by deviations from the optimal decision level in systems employing in-line 2R optical regeneration. We have demonstrated that in such systems an analytical linear approximation that better fits the central part of the regenerator's nonlinear transfer function produces a more accurate approximation of the BER and BER penalty. We also present a statistical analysis of the RZ-DPSK optical signal at a direct-detection receiver with Mach-Zehnder interferometer demodulation.
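A toy multicanonical Monte Carlo loop conveying the idea used here: the sampling is iteratively reweighted so that rare values of an observable are visited as often as typical ones, which makes distribution tails accessible. The observable, proposal width, binning and iteration counts are all placeholders; the actual MMC implementation operates on the full signal-propagation model (soliton parameters, receiver decision variable) rather than this toy function:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for a receiver decision variable: a nonlinear function of many
# Gaussian noise samples whose far tail is too rare for plain Monte Carlo.
N_NOISE = 32
def observable(u):
    return np.sum(u**2) / N_NOISE

edges = np.linspace(0.0, 4.0, 81)             # histogram bins for the observable
log_p_est = np.zeros(len(edges) - 1)          # log of the current pdf estimate (up to a constant)

def bin_of(x):
    return int(np.clip(np.searchsorted(edges, x) - 1, 0, len(log_p_est) - 1))

u = rng.normal(size=N_NOISE)
for iteration in range(8):                    # multicanonical iterations
    hist = np.zeros_like(log_p_est)
    for _ in range(20000):
        v = u + 0.3 * rng.normal(size=N_NOISE)              # Metropolis proposal
        # sample with weight 1/p_est so rarely visited bins are favoured
        log_acc = (-0.5 * np.sum(v**2) + 0.5 * np.sum(u**2)
                   + log_p_est[bin_of(observable(u))] - log_p_est[bin_of(observable(v))])
        if np.log(rng.random()) < log_acc:
            u = v
        hist[bin_of(observable(u))] += 1
    # the biased histogram times the old estimate gives the refined estimate
    log_p_est += np.log(np.maximum(hist, 1))
    log_p_est -= log_p_est.max()

pdf = np.exp(log_p_est)
pdf /= np.sum(pdf * np.diff(edges))
tail = np.sum((pdf * np.diff(edges))[edges[:-1] >= 3.0])
print("estimated P(observable > 3) ~", tail)
```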