974 results for "Monte-Carlo study"


Relevance: 90.00%

Abstract:

The current study analyzes the leachate distribution in the Orchard Hills Landfill, Davis Junction, Illinois, using a two-phase flow model to assess the influence of variability in hydraulic conductivity on the effectiveness of the existing leachate recirculation system and its operations through reliability analysis. Numerical modeling, using finite-difference code, is performed with due consideration to the spatial variation of hydraulic conductivity of the municipal solid waste (MSW). An inhomogeneous and anisotropic waste condition is assumed because it is a more realistic representation of the MSW. For the reliability analysis, the landfill is divided into 10 MSW layers with different mean values of vertical and horizontal hydraulic conductivities (decreasing from top to bottom), and the parametric study is performed by taking the coefficients of variation (COVs) as 50, 100, 150, and 200%. Monte Carlo simulations are performed to obtain statistical information (mean and COV) of three output parameters: (1) the wetted area of the MSW, (2) the maximum induced pore pressure, and (3) the leachate outflow. The results of the reliability analysis are used to determine the influence of hydraulic conductivity on the effectiveness of the leachate recirculation and are discussed in light of a deterministic approach. The study is useful in understanding the efficiency of the leachate recirculation system. (C) 2013 American Society of Civil Engineers.
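The Monte Carlo loop described above, sampling hydraulic conductivity from an assumed distribution and collecting the mean and COV of a model output, can be sketched as follows. The lognormal distribution for K, its parameter values, and the `leachate_outflow` response function are illustrative assumptions standing in for the finite-difference flow model, not the study's actual model.

```python
import math
import random

random.seed(42)

def lognormal_params(mean, cov):
    """Convert an arithmetic mean and COV to the underlying normal mu, sigma."""
    sigma2 = math.log(1.0 + cov ** 2)
    return math.log(mean) - 0.5 * sigma2, math.sqrt(sigma2)

def leachate_outflow(k):
    """Hypothetical stand-in for the flow model: outflow assumed
    proportional to sqrt(K), purely for illustration."""
    return 10.0 * math.sqrt(k)

def mc_mean_cov(mean_k=1e-5, cov_k=1.0, n=20_000):
    """Propagate lognormal K through the response; return output mean and COV."""
    mu, s = lognormal_params(mean_k, cov_k)
    out = [leachate_outflow(math.exp(random.gauss(mu, s))) for _ in range(n)]
    m = sum(out) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in out) / (n - 1))
    return m, sd / m

mean_out, cov_out = mc_mean_cov()   # output statistics for COV(K) = 100%
```

The same loop, repeated for each input COV (50 to 200%), yields the statistics of the wetted area, pore pressure, and outflow reported in the study.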

Relevance: 90.00%

Abstract:

Stochastic modelling is a useful way of simulating complex hard-rock aquifers, as hydrological properties (permeability, porosity, etc.) can be described using random variables with known statistics. However, very few studies have assessed the influence of topological uncertainty (i.e. the variability of thickness of conductive zones in the aquifer), probably because it is not easy to retrieve accurate statistics of the aquifer geometry, especially in a hard-rock context. In this paper, we assessed the potential of using geophysical surveys to describe the geometry of a hard-rock aquifer in a stochastic modelling framework. The study site was a small experimental watershed in South India, where the aquifer consisted of a clayey to loamy-sandy zone (regolith) underlain by a conductive fissured rock layer (protolith) and the unweathered gneiss (bedrock) at the bottom. The spatial variability of the thickness of the regolith and fissured layers was estimated from electrical resistivity tomography (ERT) profiles, which were performed along a few cross sections in the watershed. For stochastic analysis using Monte Carlo simulation, the generated random layer thickness was conditioned on the available data from the geophysics. In order to simulate steady-state flow in the irregular domain with variable geometry, we used an isoparametric finite element method to discretize the flow equation over an unstructured grid with irregular hexahedral elements. The results indicated that the spatial variability of the layer thickness had a significant effect on reducing the simulated effective steady seepage flux, and that using the conditional simulations reduced the uncertainty of the simulated seepage flux. In conclusion, combining information on the aquifer geometry obtained from geophysical surveys with stochastic modelling is a promising methodology for improving the simulation of groundwater flow in complex hard-rock aquifers. (C) 2013 Elsevier B.V. All rights reserved.
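A minimal sketch of the conditioning step, making simulated layer thicknesses honor geophysical observations, using standard Gaussian conditioning on a 1-D transect. The covariance model, thickness values, and observation locations are invented for illustration; the study's actual aquifer geometry and ERT data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transect: regolith thickness as a 1-D Gaussian random field
x = np.linspace(0.0, 1000.0, 51)                             # positions (m)
C = 4.0 * np.exp(-np.abs(x[:, None] - x[None, :]) / 200.0)   # exponential covariance
mean = np.full(x.size, 15.0)                                 # prior mean thickness (m)

obs_idx = np.array([5, 25, 45])                 # ERT-constrained locations
obs_val = np.array([12.0, 18.0, 14.0])          # "measured" thicknesses (m)

# Condition the field on the observations (Gaussian conditioning formulas)
Coo = C[np.ix_(obs_idx, obs_idx)]
Cxo = C[:, obs_idx]
cond_mean = mean + Cxo @ np.linalg.solve(Coo, obs_val - mean[obs_idx])
cond_cov = C - Cxo @ np.linalg.solve(Coo, Cxo.T)

# Conditional realizations feed the Monte Carlo flow simulations
real = rng.multivariate_normal(cond_mean, cond_cov, size=200)
```

Every realization passes exactly through the observed thicknesses, which is what reduces the spread of the simulated seepage flux relative to unconditional simulation.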

Relevance: 90.00%

Abstract:

The solid-solid collapse transition in open framework structures is ubiquitous in nature. The real difficulty in understanding the detailed microscopic aspects of such transitions arises from the interplay between the different energy and length scales involved in molecular systems, often mediated through a solvent. In this work we employ Monte-Carlo simulation to study the collapse transition in a model molecular system interacting via both isotropic and anisotropic interactions having different length and energy scales. The model we use is known as Mercedes-Benz (MB), which, for a specific set of parameters, sustains two solid phases: honeycomb and oblique. In order to study the temperature-induced collapse transition, we start with a metastable honeycomb solid and induce the transition by increasing temperature. The high-density oblique solid so formed has two characteristic length scales corresponding to the isotropic and anisotropic parts of the interaction potential. Contrary to common belief and classical nucleation theory, we find, interestingly, linear strip-like nucleating clusters having significantly different order and average coordination number than the bulk stable phase. In the early stage of growth, the cluster grows as a linear strip, followed by branched and ring-like strips. The geometry of the growing cluster is a consequence of the delicate balance between the two types of interactions, which enables the dominance of stabilizing energy over destabilizing surface energy. The nucleus of the stable oblique phase is wetted by intermediate-order particles, which minimizes the surface free energy. In the case of the pressure-induced transition at low temperature, the collapsed state is a disordered solid. The disordered solid phase has diverse local quasi-stable structures along with oblique-solid-like domains. (C) 2013 AIP Publishing LLC.
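The temperature-induced escape from the metastable honeycomb phase rests on the standard Metropolis rule: uphill moves are accepted with Boltzmann probability, so barrier crossings become frequent as temperature rises. A toy sketch of that mechanism (the energy barrier and temperatures are arbitrary numbers, not MB-model parameters):

```python
import math
import random

random.seed(7)

def metropolis_accept(d_e, temp):
    """Standard Metropolis criterion (k_B = 1): downhill moves always accepted,
    uphill moves with Boltzmann probability exp(-d_e/temp)."""
    return d_e <= 0.0 or random.random() < math.exp(-d_e / temp)

def uphill_acceptance_rate(d_e, temp, n=50_000):
    """Fraction of attempted uphill moves of size d_e accepted at temp."""
    return sum(metropolis_accept(d_e, temp) for _ in range(n)) / n

low_t = uphill_acceptance_rate(1.0, 0.2)   # cold: the metastable phase survives
high_t = uphill_acceptance_rate(1.0, 1.0)  # heated: barrier crossings frequent
```

At temp = 1.0 the uphill acceptance rate approaches exp(-1), so nucleation events out of the metastable honeycomb phase become observable on simulation timescales.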

Relevance: 90.00%

Abstract:

Estimation of design quantiles of hydrometeorological variables at critical locations in river basins is necessary for hydrological applications. To arrive at reliable estimates for locations (sites) where no or limited records are available, various regional frequency analysis (RFA) procedures have been developed over the past five decades. The most widely used procedure is based on the index-flood approach and L-moments. It assumes that the values of the scale and shape parameters of the frequency distribution are identical across all the sites in a homogeneous region. In real-world scenarios, this assumption may not be valid even if a region is statistically homogeneous. To address this issue, a novel mathematical approach is proposed. It involves (i) identification of an appropriate frequency distribution to fit the random variable being analyzed for the homogeneous region, (ii) use of a proposed transformation mechanism to map observations of the variable from the original space to a dimensionless space where the form of the distribution does not change and variation in the values of its parameters is minimal across sites, (iii) construction of a growth curve in the dimensionless space, and (iv) mapping the curve to the original space for the target site by applying the inverse transformation to arrive at the required quantile(s) for the site. Effectiveness of the proposed approach (PA) in predicting quantiles for ungauged sites is demonstrated through Monte Carlo simulation experiments considering five frequency distributions that are widely used in RFA, and by a case study on watersheds in the conterminous United States. Results indicate that the PA outperforms methods based on the index-flood approach.
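The L-moments underlying the index-flood procedure can be estimated from probability-weighted moments of the ordered sample. A minimal sketch (the symmetric test sample is arbitrary; in RFA these statistics would be computed per site and pooled across the region):

```python
def sample_l_moments(data):
    """First three sample L-moments (l1, l2, l3) from the unbiased
    probability-weighted moments b0, b1, b2 of the ordered sample."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0

# l1 is the index-flood scale; the L-CV (l2/l1) feeds homogeneity testing
l1, l2, l3 = sample_l_moments(range(1, 11))   # symmetric sample: l3 must vanish
l_cv = l2 / l1
```

For a symmetric sample like 1..10, l3 is exactly zero, which is a convenient sanity check on the estimator.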

Relevance: 90.00%

Abstract:

The problem of time-variant reliability analysis of randomly parametered and randomly driven nonlinear vibrating systems is considered. The study combines two Monte Carlo variance-reduction strategies into a single framework to tackle the problem. The first of these strategies is based on the application of the Girsanov transformation to account for the randomness in dynamic excitations, and the second approach is fashioned after the subset simulation method to deal with randomness in system parameters. Illustrative examples include studies of single/multi-degree-of-freedom linear/non-linear inelastic randomly parametered building frame models driven by stationary/non-stationary, white/filtered white noise support acceleration. The estimated reliability measures are demonstrated to compare well with results from direct Monte Carlo simulations. (C) 2014 Elsevier Ltd. All rights reserved.
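Subset simulation, the second of the two variance-reduction strategies, expresses a small failure probability as a product of larger conditional probabilities estimated level by level with MCMC. A 1-D sketch for a standard normal variable (the Metropolis resampler and level fraction p0 = 0.1 follow the usual formulation; the target threshold is arbitrary):

```python
import math
import random

random.seed(3)

def subset_simulation(g, b_fail, n=1000, p0=0.1):
    """Estimate P(g(X) >= b_fail) for X ~ N(0,1): at each level keep the
    p0*n best samples as seeds and grow conditional Metropolis chains
    from them (1-D illustrative version)."""
    samples = [random.gauss(0.0, 1.0) for _ in range(n)]
    p_f = 1.0
    for _ in range(20):                            # cap on the number of levels
        vals = sorted(samples, key=g, reverse=True)
        b = g(vals[int(p0 * n) - 1])               # intermediate threshold
        if b >= b_fail:                            # final level reached
            return p_f * sum(g(x) >= b_fail for x in samples) / n
        p_f *= p0
        samples = []
        for seed in vals[: int(p0 * n)]:
            x = seed
            for _ in range(int(1 / p0)):           # extend each chain 1/p0 states
                cand = x + random.gauss(0.0, 1.0)
                # N(0,1) Metropolis ratio, restricted to the current subset
                if (random.random() < math.exp((x * x - cand * cand) / 2.0)
                        and g(cand) >= b):
                    x = cand
                samples.append(x)
    return p_f

p_fail = subset_simulation(lambda x: x, 3.0)       # exact value is about 1.35e-3
```

With only a few thousand model evaluations, the estimate lands in the right order of magnitude, where direct Monte Carlo would need hundreds of thousands of samples.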

Relevance: 90.00%

Abstract:

This study considers linear filtering methods for minimising the end-to-end average distortion of a fixed-rate source quantisation system. For the source encoder, both scalar and vector quantisation are considered. The codebook index output by the encoder is sent over a noisy discrete memoryless channel whose statistics could be unknown at the transmitter. At the receiver, the code vector corresponding to the received index is passed through a linear receive filter, whose output is an estimate of the source instantiation. Under this setup, an approximate expression for the average weighted mean-square error (WMSE) between the source instantiation and the reconstructed vector at the receiver is derived using high-resolution quantisation theory. A closed-form expression for the linear receive filter that minimises the approximate average WMSE is also derived. The generality of the framework developed is further demonstrated by theoretically analysing the performance of other adaptation techniques that can be employed when the channel statistics are also available at the transmitter, such as joint transmit-receive linear filtering and codebook scaling. Monte Carlo simulation results validate the theoretical expressions, and illustrate the improvement in the average distortion that can be obtained using linear filtering techniques.
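The linear receive filter minimising mean-square error has the classical Wiener form W = C_xy C_yy^{-1} in the zero-mean case. A sketch with a synthetic source and additive "channel" corruption (the 0.9 gain and noise level are arbitrary; the paper's quantiser/DMC setup is not modelled):

```python
import numpy as np

rng = np.random.default_rng(5)

n = 20_000
x = rng.normal(0.0, 1.0, size=(n, 2))                # source vectors
y = 0.9 * x + rng.normal(0.0, 0.5, size=(n, 2))      # corrupted reconstructions

# LMMSE receive filter, zero-mean case: W = C_xy C_yy^{-1}
Cxy = x.T @ y / n
Cyy = y.T @ y / n
W = Cxy @ np.linalg.inv(Cyy)

mse_raw = np.mean(np.sum((y - x) ** 2, axis=1))          # unfiltered
mse_filt = np.mean(np.sum((y @ W.T - x) ** 2, axis=1))   # filtered estimate
```

The filtered distortion is provably no worse than the unfiltered one, which is the mechanism exploited by the receive filter in the paper.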

Relevance: 90.00%

Abstract:

Regionalization approaches are widely used in water resources engineering to identify hydrologically homogeneous groups of watersheds that are referred to as regions. Pooled information from sites (depicting watersheds) in a region forms the basis to estimate quantiles associated with hydrological extreme events at ungauged/sparsely gauged sites in the region. Conventional regionalization approaches can be effective when watersheds (data points) corresponding to different regions can be separated using straight lines or linear planes in the space of watershed-related attributes. In this paper, a kernel-based Fuzzy c-means (KFCM) clustering approach is presented for use in situations where such linear separation of regions cannot be accomplished. The approach uses kernel-based functions to map the data points from the attribute space to a higher-dimensional space where they can be separated into regions by linear planes. A procedure to determine the optimal number of regions with the KFCM approach is suggested. Further, formulations to estimate flood quantiles at ungauged sites with the approach are developed. Effectiveness of the approach is demonstrated through Monte-Carlo simulation experiments and a case study on watersheds in the United States. Comparison of results with those based on conventional Fuzzy c-means clustering, the Region-of-influence approach, and a prior study indicates that the KFCM approach outperforms the other approaches in forming regions that are closer to being statistically homogeneous and in estimating flood quantiles at ungauged sites. Key Points: (i) a kernel-based regionalization approach is presented for flood frequency analysis; (ii) a kernel procedure to estimate flood quantiles at ungauged sites is developed; (iii) a set of fuzzy regions is delineated in Ohio, USA.
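A sketch of kernel fuzzy c-means with a Gaussian kernel, in the common variant where prototypes are kept in the attribute space. The two synthetic blobs stand in for watershed attribute data, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def kfcm(X, V0, m=2.0, sigma=1.0, iters=60):
    """Kernel fuzzy c-means, Gaussian kernel, prototypes in input space.
    Returns memberships U (n x c) and prototypes V."""
    V = V0.copy()
    for _ in range(iters):
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) / sigma ** 2)
        d2 = np.maximum(1.0 - K, 1e-12)          # kernel-induced squared distance
        U = (1.0 / d2) ** (1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per site
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]   # kernel-weighted prototype update
    return U, V

# Two synthetic "regions" in a 2-attribute space (stand-ins for watershed data)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
U, V = kfcm(X, V0=X[[0, -1]])
labels = U.argmax(axis=1)
```

Because distances are computed through the kernel, the same update handles clusters that no linear plane in the original attribute space could separate.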

Relevance: 90.00%

Abstract:

Adsorption experiments of mixtures of long-chain alkanes into silicalite under liquid-phase conditions show selectivity inversion and azeotrope formation. These effects are due to the subtle interplay between the size of the adsorbed molecules and the pore topology of the adsorbent. In this study, the selective uptake of the lighter component during liquid-phase adsorption of C/C and C/C n-alkane binary mixtures in the zeolite silicalite is understood through the configurational-bias grand-canonical Monte Carlo molecular simulation technique and a coarse-grained siting analysis. The simulations are conducted under conditions of low and intermediate levels of loading. The siting pattern of the adsorbates inside the zeolite pores explains the selectivity seen in experiments.
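Grand canonical Monte Carlo accepts insertions and deletions with probabilities that balance the chemical potential against the interaction energy change. A sketch for the degenerate ideal-gas case (no adsorbate-adsorbate energy, thermal wavelength set to 1), where the exact average occupancy <N> = V*exp(beta*mu) gives a built-in check:

```python
import math
import random

random.seed(11)

def ideal_gas_gcmc(beta_mu=0.0, volume=50.0, steps=200_000):
    """Grand canonical MC for an ideal gas: insertion accepted with
    min(1, V e^{beta mu}/(N+1)), deletion with the inverse ratio.
    Exact check: <N> = V * exp(beta*mu)."""
    n, total = 0, 0
    for _ in range(steps):
        if random.random() < 0.5:                 # attempt insertion
            if random.random() < min(1.0, volume * math.exp(beta_mu) / (n + 1)):
                n += 1
        elif n > 0:                               # attempt deletion
            if random.random() < min(1.0, n * math.exp(-beta_mu) / volume):
                n -= 1
        total += n
    return total / steps

avg_n = ideal_gas_gcmc()    # should fluctuate around volume = 50
```

In a real silicalite simulation the acceptance rule additionally carries the Boltzmann factor of the framework interaction energy, and configurational-bias moves replace blind insertions for the chain molecules.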

Relevance: 90.00%

Abstract:

We study the equilibrium properties of an Ising model on a disordered random network where the disorder can be quenched or annealed. The network consists of fourfold coordinated sites connected via variable length one-dimensional chains. Our emphasis is on nonuniversal properties and we consider the transition temperature and other equilibrium thermodynamic properties, including those associated with one-dimensional fluctuations arising from the chains. We use analytic methods in the annealed case, and a Monte Carlo simulation for the quenched disorder. Our objective is to study the difference between quenched and annealed results with a broad random distribution of interaction parameters. The former represents a situation where the time scale associated with the randomness is very long and the corresponding degrees of freedom can be viewed as frozen, while the annealed case models the situation where this is not so. We find that the transition temperature and the entropy associated with one-dimensional fluctuations are always higher for quenched disorder than in the annealed case. These differences increase with the strength of the disorder up to a saturating value. We discuss our results in connection to physical systems where a broad distribution of interaction strengths is present.
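For the quenched case, the couplings are frozen at the start and the spins are then sampled by Metropolis dynamics. A 1-D ring sketch with ferromagnetic bonds drawn from a broad uniform distribution (the lattice, temperature, and coupling range are illustrative, not the fourfold-coordinated network of the study):

```python
import math
import random

random.seed(13)

n = 200
temp = 0.5
# Quenched disorder: couplings frozen from a broad distribution;
# bond i couples spins i and i+1 on a ring
J = [random.uniform(0.5, 1.5) for _ in range(n)]
s = [random.choice((-1, 1)) for _ in range(n)]

def energy():
    return -sum(J[i] * s[i] * s[(i + 1) % n] for i in range(n))

def sweep():
    for i in range(n):
        # cost of flipping spin i: only its two bonds change sign
        d_e = 2.0 * s[i] * (J[i - 1] * s[i - 1] + J[i] * s[(i + 1) % n])
        if d_e <= 0.0 or random.random() < math.exp(-d_e / temp):
            s[i] = -s[i]

e_start = energy()
for _ in range(500):
    sweep()
e_end = energy()
```

Averaging such runs over many frozen realizations of J gives the quenched ensemble; in the annealed case the couplings would instead be resampled along with the spins.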

Relevance: 90.00%

Abstract:

Development of microporous adsorbents for separation and sequestration of carbon dioxide from flue gas streams is an area of active research. In this study, we assess the influence of specific functional groups on the adsorption selectivity of CO2/N2 mixtures through Grand Canonical Monte Carlo (GCMC) simulations. Our model system consists of a bilayer graphene nanoribbon that has been edge-functionalized with OH, NH2, NO2, CH3 and COOH. Ab initio Moller-Plesset (MP2) calculations with functionalized benzenes are used to obtain binding energies and optimized geometries for CO2 and N2. This information is used to validate the choice of classical force fields in the GCMC simulations. In addition to simulations of adsorption from binary mixtures of CO2 and N2, the ideal adsorbed solution theory (IAST) is used to predict mixture isotherms. Our study reveals that functionalization always leads to an increase in the adsorption of both CO2 and N2, with the largest increase for COOH. However, a significant enhancement in the selectivity for CO2 is only seen with COOH-functionalized nanoribbons. The COOH functionalization gives a 28% increase in selectivity compared to H-terminated nanoribbons, whereas the improvements in selectivity for the other functional groups are much more modest. Our study suggests that specific functionalization with COOH groups can provide a materials design strategy to improve CO2 selectivity in microporous adsorbents. Synthesis of graphene nanoplatelets with edge-functionalized COOH, which has the potential for large-scale production, has recently been reported (Jeon et al., 2012). (C) 2014 Elsevier Ltd. All rights reserved.
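IAST predicts mixture loadings from pure-component isotherms by equating the spreading pressures of the components at their standard-state pressures. A sketch for a binary mixture with single-site Langmuir isotherms (the CO2 and N2 parameter values are invented placeholders, not fitted to the simulations above):

```python
import math

def langmuir_pi(q_sat, b, p):
    """Reduced spreading pressure of a single-site Langmuir isotherm."""
    return q_sat * math.log(1.0 + b * p)

def iast_binary(p, y1, iso1, iso2):
    """IAST for a binary mixture: bisect on the adsorbed-phase fraction x1
    until both components see equal spreading pressure at their
    standard-state pressures p*y/x."""
    y2 = 1.0 - y1
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(100):
        x1 = 0.5 * (lo + hi)
        f = langmuir_pi(*iso1, p * y1 / x1) - langmuir_pi(*iso2, p * y2 / (1.0 - x1))
        if f > 0.0:
            lo = x1    # component 1's spreading pressure too high: raise x1
        else:
            hi = x1
    return 0.5 * (lo + hi)

# Hypothetical CO2 and N2 Langmuir parameters (q_sat, b); illustrative only
x_co2 = iast_binary(1.0, 0.15, (3.0, 2.5), (2.0, 0.15))
selectivity = (x_co2 / 0.15) / ((1.0 - x_co2) / 0.85)
```

The selectivity defined this way, the ratio of adsorbed-phase to gas-phase composition ratios, is the quantity the 28% COOH enhancement refers to.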

Relevance: 90.00%

Abstract:

When Markov chain Monte Carlo (MCMC) samplers are used in problems of system parameter identification, one faces computational difficulties in dealing with large amounts of measurement data and (or) low levels of measurement noise. Such exigencies are likely to occur in problems of parameter identification in dynamical systems, where the amount of vibratory measurement data and the number of parameters to be identified can be large. In such cases, the posterior probability density function of the system parameters tends to have regions of narrow support, and a finite-length MCMC chain is unlikely to cover the pertinent regions. The present study proposes strategies, based on modification of measurement equations and subsequent corrections, to alleviate this difficulty. This involves artificial enhancement of measurement noise, assimilation of transformed packets of measurements, and a global iteration strategy to improve the choice of prior models. Illustrative examples cover laboratory studies on a time-variant dynamical system and a bending-torsion coupled, geometrically non-linear building frame under earthquake support motions. (C) 2015 Elsevier Ltd. All rights reserved.
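The artificial enhancement of measurement noise can be illustrated on a scalar problem: with many low-noise measurements the posterior support is so narrow that a random-walk chain rarely accepts a move, while inflating the noise broadens the support and restores mixing. A sketch under simplifying assumptions (scalar parameter, flat prior, arbitrary inflation factor; the paper's subsequent correction and packet-assimilation steps are not shown):

```python
import math
import random

random.seed(17)

sigma = 0.01                     # low measurement noise: very narrow posterior
data = [2.0 + random.gauss(0.0, sigma) for _ in range(500)]   # "measurements"

def log_post(theta, noise_scale):
    """Gaussian log-likelihood with artificially enhanced noise, flat prior.
    noise_scale = 1 is the true model; > 1 broadens the posterior."""
    s2 = (noise_scale * sigma) ** 2
    return -0.5 * sum((y - theta) ** 2 for y in data) / s2

def metropolis(n_steps, step, noise_scale, theta0=2.0):
    theta, lp, acc = theta0, log_post(theta0, noise_scale), 0
    for _ in range(n_steps):
        cand = theta + random.gauss(0.0, step)
        lp_c = log_post(cand, noise_scale)
        if math.log(random.random()) < lp_c - lp:
            theta, lp, acc = cand, lp_c, acc + 1
    return theta, acc / n_steps

# Raw noise: posterior sd ~ sigma/sqrt(500) = 4.5e-4, so a practical step
# size is almost never accepted and the chain barely moves.
_, rate_raw = metropolis(2000, 0.05, noise_scale=1.0)
# Enhanced noise (x50 here): wider support, acceptance restored.
theta_enh, rate_enh = metropolis(2000, 0.05, noise_scale=50.0)
```

The inflated-noise chain explores freely at the cost of an over-dispersed posterior, which is why the paper pairs the enhancement with correction steps.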

Relevance: 90.00%

Abstract:

This paper addresses the problem of intercepting highly maneuverable threats using seeker-less interceptors that operate in the command guidance mode. These systems are more prone to estimation errors than standard seeker-based systems. In this paper, an integrated estimation/guidance (IEG) algorithm, which combines an interactive multiple model (IMM) estimator with a differential game guidance law (DGL), is proposed for seeker-less interception. In this interception scenario, the target performs an evasive bang-bang maneuver, while the sensor has noisy measurements and the interceptor is subject to an acceleration bound. The IMM serves as a basis for the synthesis of efficient filters for tracking maneuvering targets and reducing estimation errors. The proposed game-based guidance law for two-dimensional interception, later extended to three-dimensional interception scenarios, is used to improve the endgame performance of the command-guided seeker-less interceptor. The IMM scheme and an optimal selection of filters, to cater to the various maneuvers that are expected during the endgame, are also described. Furthermore, a chatter-removal algorithm is introduced, modifying the differential game guidance law (modified DGL). A comparison between the modified DGL guidance law and the conventional proportional navigation guidance law demonstrates significant improvement in miss distance in a pursuer-evader scenario. Simulation results are also presented for varying flight-path-angle errors. A numerical study demonstrates the performance of the combined interactive multiple model with game-based guidance law (IMM/DGL). A simulation study is also carried out for the combined IMM and modified DGL (IMM/modified DGL), which demonstrates the superior performance and viability of the algorithm in reducing the chattering phenomenon. The results are illustrated by an extensive Monte Carlo simulation study in the presence of estimation errors.
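The heart of the IMM estimator is the mode-probability update: prior mode probabilities are mixed through the Markov switching matrix and then reweighted by each model's measurement likelihood. A sketch with two modes and invented numbers (the switching matrix and Gaussian likelihoods are illustrative, not the paper's filter bank):

```python
import math

def gauss_pdf(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def imm_mode_update(mu, P, likelihoods):
    """One IMM cycle's mode-probability update: mix the prior probabilities
    through the Markov switching matrix P, then reweight by each model's
    measurement likelihood and normalize."""
    k = len(mu)
    c = [sum(P[i][j] * mu[i] for i in range(k)) for j in range(k)]
    post = [likelihoods[j] * c[j] for j in range(k)]
    norm = sum(post)
    return [p / norm for p in post]

# Two modes: non-maneuvering vs bang-bang maneuver model (illustrative numbers)
P = [[0.95, 0.05], [0.05, 0.95]]
mu = [0.9, 0.1]
# A measurement residual far from model 0's prediction but close to model 1's
L = [gauss_pdf(9.0, 0.0, 4.0), gauss_pdf(9.0, 8.0, 4.0)]
mu_new = imm_mode_update(mu, P, L)
```

A single decisive residual is enough to shift nearly all probability to the maneuver model, which is how the IMM reacts quickly to the target's bang-bang switches.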

Relevance: 90.00%

Abstract:

Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity-Duration-Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and uncertainty as a result of using multiple GCMs. It is important to study these uncertainties and propagate them to the future for accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to data and from the multiple GCM models using a Bayesian approach. The posterior distribution of the parameters is obtained from Bayes' rule, and the parameters are transformed to obtain return levels for a specified return period. A Markov chain Monte Carlo (MCMC) method using the Metropolis-Hastings algorithm is used to obtain the posterior distribution of the parameters. Twenty-six CMIP5 GCMs, along with four RCP scenarios, are considered for studying the effects of climate change and for obtaining projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale-invariance theory is employed for obtaining short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high when compared to longer durations. Further, it is observed that parameter uncertainty is large compared to the model uncertainty. (C) 2015 Elsevier Ltd. All rights reserved.
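The parameter-uncertainty step, Metropolis-Hastings sampling of distribution parameters followed by transformation to return levels, can be sketched for a Gumbel (GEV with zero shape) annual-maximum model. The synthetic data, flat priors, and proposal scales are illustrative assumptions:

```python
import math
import random

random.seed(19)

def gumbel_logpdf(x, loc, scale):
    z = (x - loc) / scale
    return -z - math.exp(-z) - math.log(scale)

def return_level(loc, scale, t_years):
    """T-year return level of a Gumbel annual-maximum model."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / t_years))

# Synthetic annual rainfall maxima (mm), drawn from Gumbel(60, 12)
data = [60.0 - 12.0 * math.log(-math.log(random.random())) for _ in range(80)]

def log_post(loc, scale):
    if scale <= 0.0:
        return -math.inf
    return sum(gumbel_logpdf(x, loc, scale) for x in data)   # flat priors

# Random-walk Metropolis-Hastings over (loc, scale)
theta = [60.0, 12.0]
lp = log_post(*theta)
levels = []
for i in range(6000):
    cand = [theta[0] + random.gauss(0.0, 1.5), theta[1] + random.gauss(0.0, 1.0)]
    lp_c = log_post(*cand)
    if math.log(random.random()) < lp_c - lp:
        theta, lp = cand, lp_c
    if i >= 1000:                                    # discard burn-in
        levels.append(return_level(theta[0], theta[1], 100.0))

levels.sort()
ci = (levels[int(0.025 * len(levels))], levels[int(0.975 * len(levels))])
```

Transforming every posterior sample to a return level, rather than plugging in point estimates, is what turns parameter uncertainty into a credible interval for the IDF curve.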

Relevance: 90.00%

Abstract:

Regional frequency analysis is widely used for estimating quantiles of hydrological extreme events at sparsely gauged/ungauged target sites in river basins. It involves identification of a region (group of watersheds) resembling the watershed of the target site, and use of information pooled from the region to estimate the quantile for the target site. In the analysis, the watershed of the target site is assumed to completely resemble the watersheds in the identified region in terms of the mechanism underlying generation of the extreme event. In reality, it is rare to find watersheds that completely resemble each other. A fuzzy clustering approach can account for partial resemblance of watersheds and yield region(s) for the target site. Formation of regions and quantile estimation require discerning information from the fuzzy-membership matrix obtained based on the approach. Practitioners often defuzzify the matrix to form disjoint clusters (regions) and use them as the basis for quantile estimation. The defuzzification approach (DFA) results in loss of the information discerned on partial resemblance of watersheds. The lost information cannot be utilized in quantile estimation, owing to which the estimates could have significant error. To avert this loss of information, a threshold strategy (TS) was considered in some prior studies. In this study, it is analytically shown that the strategy results in under-prediction of quantiles. To address this, a mathematical approach is proposed, and its effectiveness in estimating flood quantiles relative to DFA and TS is demonstrated through Monte-Carlo simulation experiments and a case study on the Mid-Atlantic water resources region, USA. (C) 2015 Elsevier B.V. All rights reserved.
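The information loss caused by defuzzification can be seen in a two-region toy example: hard assignment discards the partial memberships, while a membership-weighted combination retains them. This generic weighting is only an illustration, not the specific mathematical approach proposed in the study:

```python
# Membership-weighted quantile estimation versus defuzzification (toy example).
# Two regions with different 100-year growth factors; a target watershed that
# partially resembles both (hypothetical memberships 0.6 / 0.4).

growth = {"region_A": 2.8, "region_B": 4.1}      # hypothetical growth factors
membership = {"region_A": 0.6, "region_B": 0.4}
index_flood = 150.0                              # at-site mean flood (m^3/s)

# Defuzzification: assign the site wholly to its maximum-membership region
hard_region = max(membership, key=membership.get)
q_defuzz = index_flood * growth[hard_region]

# Weighted estimate retains the partial-resemblance information
q_weighted = index_flood * sum(membership[r] * growth[r] for r in growth)
```

Here the defuzzified estimate ignores region_B entirely, so the two estimates differ; whenever the discarded memberships point to regions with larger growth factors, hard assignment under-predicts, which is the kind of bias the paper's approach is designed to avoid.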