901 results for stochastic simulation method


Relevance: 30.00%

Abstract:

Superplastic bulging is the most successful industrial application of superplastic forming (SPF), but the non-uniform wall thickness distribution of parts formed by it remains a common technical problem. Based on a rigid-viscoplastic finite element program developed by the authors for simulating the sheet superplastic forming process, combined with prediction of microstructure variations (such as grain growth and cavity growth), a simple and efficient preform design method is proposed and applied to the design of a preform mould for manufacturing parts with uniform wall thickness. Examples of formed parts are presented to demonstrate that the technology can improve the uniformity of wall thickness to meet practical requirements. (C) 2004 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

The Lattice Solid Model has been used successfully as a virtual laboratory to simulate the fracturing of rocks, the dynamics of faults, earthquakes and gouge processes. However, results from those simulations show that, in order to make the next step towards more realistic experiments, it will be necessary to use models containing a significantly larger number of particles than current models. Those simulations will therefore require a greatly increased amount of computational resources. Whereas the computing power provided by single processors can be expected to increase according to Moore's law, i.e., to double every 18-24 months, parallel computers can provide significantly larger computing power today. In order to make this computing power available for the simulation of the microphysics of earthquakes, a parallel version of the Lattice Solid Model has been implemented. Benchmarks using large models with several million particles have shown that the parallel implementation of the Lattice Solid Model can achieve a high parallel efficiency of about 80% for large numbers of processors on different computer architectures.
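
To make the quoted figure concrete, the sketch below (not part of the paper; all timing numbers are hypothetical) shows how parallel efficiency is conventionally computed as speedup divided by processor count.

```python
# Illustrative only: parallel speedup and efficiency from wall-clock timings.
# The timing values are hypothetical, not measurements from the Lattice Solid Model.

def parallel_efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    """Efficiency = speedup / number of processors, where speedup = T1 / Tp."""
    speedup = t_serial / t_parallel
    return speedup / n_procs

if __name__ == "__main__":
    t1 = 3600.0   # hypothetical single-processor run time (s)
    tp = 35.0     # hypothetical run time on 128 processors (s)
    p = 128
    eff = parallel_efficiency(t1, tp, p)
    print(f"speedup = {t1 / tp:.1f}x, efficiency = {eff:.0%}")   # ~80%
```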

Relevance: 30.00%

Abstract:

Proposed by M. Stutzer (1996), canonical valuation is a new method for valuing derivative securities under the risk-neutral framework. It is non-parametric, simple to apply, and, unlike many alternative approaches, does not require any option data. Although canonical valuation has great potential, its applicability in realistic scenarios has not yet been widely tested. This article documents the ability of canonical valuation to price derivatives in a number of settings. In a constant-volatility world, canonical estimates of option prices struggle to match Black-Scholes estimates based on historical volatility. However, in a more realistic stochastic-volatility setting, canonical valuation outperforms the Black-Scholes model. As the volatility-generating process moves further from the constant-volatility world, the relative performance edge of canonical valuation becomes more evident. Overall, the results suggest that canonical valuation is a useful technique for valuing derivatives. (C) 2005 Wiley Periodicals, Inc.
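
As a rough illustration of the technique described above, the following sketch prices a European call by maximum-entropy (canonical) valuation in the spirit of Stutzer (1996). The return sample, strike and risk-free rate are hypothetical, and the exponential-tilting step is one common way of imposing the martingale constraint, not necessarily the exact procedure used in the article.

```python
# A minimal sketch of canonical (maximum-entropy) valuation. Inputs are made up.
import numpy as np
from scipy.optimize import minimize_scalar

def canonical_call_price(S0, K, gross_returns, rf_gross):
    """Price a European call from a sample of gross returns over the option horizon.

    Risk-neutral weights are the entropy-maximizing tilt of the empirical
    distribution satisfying the martingale constraint E*[R / R_f] = 1.
    """
    x = gross_returns / rf_gross - 1.0            # risk-adjusted excess returns
    # gamma* minimizes the sample mean of exp(gamma * x); its first-order
    # condition enforces the martingale constraint under the tilted weights.
    res = minimize_scalar(lambda g: np.mean(np.exp(g * x)))
    w = np.exp(res.x * x)
    w /= w.sum()                                  # canonical risk-neutral weights
    payoffs = np.maximum(S0 * gross_returns - K, 0.0)
    return (w @ payoffs) / rf_gross               # discounted risk-neutral expectation

# Hypothetical usage: simulated one-month gross returns, 5% annual risk-free rate.
rng = np.random.default_rng(0)
R = np.exp(rng.normal(0.005, 0.06, size=5000))
print(canonical_call_price(S0=100.0, K=100.0, gross_returns=R, rf_gross=np.exp(0.05 / 12)))
```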

Relevance: 30.00%

Abstract:

In this paper we consider the adsorption of argon on the surface of graphitized thermal carbon black and in slit pores at temperatures ranging from subcritical to supercritical conditions by the method of grand canonical Monte Carlo simulation. Attention is paid to the variation of the adsorbed density when the temperature crosses the critical point. The adsorbed density versus pressure (bulk density) shows interesting behavior at temperatures in the vicinity of, and above, the critical point, and also at extremely high pressures. Isotherms at temperatures greater than the critical temperature exhibit a clear maximum, and near the critical temperature this maximum is a very sharp spike. Under supercritical conditions and very high pressures the excess adsorbed density decreases towards zero for a graphite surface, while for slit pores a negative excess density is possible at extremely high pressures. For imperfect pores (defined as pores that cannot accommodate an integral number of parallel layers under moderate conditions) the pressure at which the excess pore density becomes negative is lower than that for perfect pores, owing to the packing effect in those imperfect pores. However, at extremely high pressures molecules can be packed in parallel layers once the chemical potential is great enough to overcome the repulsions among adsorbed molecules. (c) 2005 American Institute of Physics.
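
To make the notion of excess adsorption used above concrete, here is a small illustrative calculation (not taken from the paper): the excess is the absolute adsorbed amount minus the contribution the bulk gas would make over the accessible volume, which is why it can pass through a maximum and then turn negative as the bulk density keeps rising. The toy absolute isotherm and all parameter values below are hypothetical.

```python
# Illustrative sketch of the excess-adsorption bookkeeping discussed above.
# A real study would take the absolute adsorbed amount from GCMC simulation.
import numpy as np

def excess_adsorption(rho_bulk, v_accessible, n_max=10.0, b=0.5):
    """Excess = absolute adsorbed amount - bulk density * accessible pore volume."""
    n_abs = n_max * b * rho_bulk / (1.0 + b * rho_bulk)   # toy absolute isotherm
    return n_abs - rho_bulk * v_accessible

rho = np.linspace(0.0, 30.0, 7)          # arbitrary bulk densities
ex = excess_adsorption(rho, v_accessible=0.8)
for r, e in zip(rho, ex):
    print(f"rho_bulk = {r:5.1f}  excess = {e:6.2f}")   # excess peaks, then goes negative
```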

Relevance: 30.00%

Abstract:

In this paper we apply a new method for the determination of the surface area of carbonaceous materials, using local surface excess isotherms obtained from Grand Canonical Monte Carlo (GCMC) simulation and a concept of area distribution in terms of the energy well depth of the solid–fluid interaction. The range of well depths considered in our GCMC simulations is from 10 to 100 K, which is wide enough to cover all carbon surfaces that we dealt with (for comparison, the well depth for a perfect graphite surface is about 58 K). Having the set of local surface excess isotherms and the differential area distribution, the overall adsorption isotherm can be obtained in integral form. Thus, given experimental data for nitrogen or argon adsorption on a carbon material, the differential area distribution can be obtained by inversion, using the regularization method. The total surface area is then obtained as the area under this distribution. We test this approach with a number of data sets from the literature and compare our GCMC surface area with that obtained from the classical BET method. In general, we find that the difference between these two surface areas is about 10%, underlining the need to determine the surface area with a consistent and reliable method. We therefore suggest the approach of this paper as an alternative to the BET method, because of the long-recognized unrealistic assumptions used in the BET theory. Besides the surface area, this method also provides information about the differential area distribution versus well depth. This information could be used as a microscopic fingerprint of the carbon surface, and it is expected that samples prepared from different precursors and under different activation conditions will have distinct fingerprints. We illustrate this with Cabot BP120, BP280 and BP460 samples; the differential area distributions obtained from the adsorption of argon at 77 K and of nitrogen at 77 K have exactly the same patterns, suggesting that the distribution is characteristic of the carbon.
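
The inversion step described above can be sketched as follows. This is not the authors' code: the toy Langmuir-type kernel, the pressure and well-depth grids, the synthetic "experimental" data and the regularization parameter are all assumptions standing in for the GCMC local isotherms and real measurements.

```python
# A minimal sketch of recovering a differential area distribution f(eps) from an
# overall isotherm that is a superposition of local excess isotherms, using
# Tikhonov regularization with a non-negativity constraint.
import numpy as np
from scipy.optimize import nnls

pressures = np.logspace(-3, 0, 40)            # reduced pressures (hypothetical grid)
well_depths = np.linspace(10.0, 100.0, 30)    # solid-fluid well-depth grid, K

def local_isotherm(p, eps):
    """Toy local isotherm: stronger wells fill at lower pressure."""
    b = np.exp(eps / 40.0)                    # hypothetical energy-affinity link
    return b * p / (1.0 + b * p)

# Kernel matrix: rows = pressures, columns = well depths.
A = np.array([[local_isotherm(p, e) for e in well_depths] for p in pressures])

# Synthetic "experimental" data generated from a known two-peak distribution.
f_true = np.exp(-0.5 * ((well_depths - 58.0) / 6.0) ** 2) \
       + 0.4 * np.exp(-0.5 * ((well_depths - 30.0) / 5.0) ** 2)
gamma_exp = A @ f_true + np.random.default_rng(1).normal(0, 1e-3, pressures.size)

# Regularized non-negative least squares: minimize ||A f - gamma||^2 + lam^2 ||f||^2.
lam = 0.05
A_aug = np.vstack([A, lam * np.eye(well_depths.size)])
b_aug = np.concatenate([gamma_exp, np.zeros(well_depths.size)])
f_est, _ = nnls(A_aug, b_aug)

# The recovered distribution approximates f_true; its integral over the
# well-depth grid plays the role of the total surface area.
print("estimated total area (arbitrary units):", np.trapz(f_est, well_depths))
```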

Relevance: 30.00%

Abstract:

The galvanic corrosion of magnesium alloy AZ91D coupled to a steel fastener was studied using a boundary element method (BEM) model and experimental measurements. The BEM model used the measured polarization curves as boundary conditions. The experimental program involved measuring the total corrosion rate as a function of distance from the interface for magnesium in the form of a sheet containing a mild steel circular insert (5 to 30 mm in diameter). The measured total corrosion rate was interpreted as the sum of galvanic corrosion and self corrosion. For a typical case, the self corrosion was estimated to be approximately 230 mm/y for the area surrounding the interface, out to a distance of about 1 cm from the interface. Scanning Kelvin Probe Force Microscopy (SKPFM) revealed microgalvanic cells with potential differences of approximately 100 mV across the AZ91D surface. These microgalvanic cells may influence the relative contributions of galvanic and self corrosion to the total corrosion of AZ91D.

Relevance: 30.00%

Abstract:

Cereal-legume intercropping plays an important role in subsistence food production in developing countries, especially in situations of limited water resources. Crop simulation can be used to assess risk for intercrop productivity over time and space. In this study, a simple model for intercropping was developed for cereal and legume growth and yield under semi-arid conditions. The model is based on radiation interception and use, and incorporates a water stress factor. Total dry matter and yield are functions of photosynthetically active radiation (PAR), the fraction of radiation intercepted and radiation use efficiency (RUE). One of two PAR sub-models was used to estimate PAR from solar radiation: either PAR is 50% of solar radiation, or the ratio of PAR to solar radiation (PAR/SR) is a function of the clearness index (K_T). The fraction of radiation intercepted was calculated either from Beer's law with crop extinction coefficients (K) from field experiments or from previous reports. RUE was calculated as a function of available soil water to a depth of 900 mm (ASW), which was determined either with the soil water balance method or with the decay curve approach. Thus, two alternatives for each of three factors, i.e., PAR/SR, K and ASW, were considered, giving eight possible models (two alternatives for each of the three factors). Model calibration and validation were carried out for maize-bean intercropping systems using data collected in a semi-arid region (Bloemfontein, Free State, South Africa) during seven growing seasons (1996/1997-2002/2003). The combination of PAR estimated from the clearness index, a crop extinction coefficient from the field experiment and the decay curve model gave the most reasonable and acceptable results. The intercrop model developed in this study is simple, so this modelling approach can be employed to develop other cereal-legume intercrop models for semi-arid regions. (c) 2004 Elsevier B.V. All rights reserved.
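
A schematic of the dry-matter bookkeeping described above is sketched below. It is not the authors' model code; the parameter values and the water-stress scaling are hypothetical placeholders, and only the "PAR = 50% of solar radiation" sub-model is shown.

```python
# Schematic daily dry-matter increment: PAR x fraction intercepted x RUE,
# with Beer's law for interception and a simple water-stress scaling of RUE.
import math

def daily_dry_matter(solar_rad, lai, k, rue_max, asw, asw_max):
    par = 0.5 * solar_rad                       # PAR taken as 50% of solar radiation (one sub-model)
    f_int = 1.0 - math.exp(-k * lai)            # Beer's law fraction of PAR intercepted
    stress = min(1.0, asw / (0.5 * asw_max))    # hypothetical water-stress factor from available soil water
    return par * f_int * rue_max * stress       # dry matter increment (schematic units)

# Hypothetical example day for the cereal component of an intercrop:
print(daily_dry_matter(solar_rad=22.0, lai=2.5, k=0.6, rue_max=1.6, asw=90.0, asw_max=200.0))
```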

Relevance: 30.00%

Abstract:

The adsorption of simple Lennard-Jones fluids in a carbon slit pore of finite length was studied with Canonical Ensemble (NVT) and Gibbs Ensemble Monte Carlo (GEMC) simulations. The Canonical Ensemble was a collection of cubic simulation boxes, each containing a finite pore, while the Gibbs Ensemble comprised the pore space of the finite pore. Argon was used as a model Lennard-Jones fluid, while the adsorbent was modelled as a finite carbon slit pore whose two walls were each composed of three graphene layers with carbon atoms arranged in a hexagonal pattern. The Lennard-Jones (LJ) 12-6 potential model was used to compute the interaction energy between two fluid particles, and also between a fluid particle and a carbon atom. Argon adsorption isotherms were obtained at 87.3 K for pore widths of 1.0, 1.5 and 2.0 nm using both the Canonical and Gibbs Ensembles, and were compared with isotherms obtained for corresponding infinite pores using the Grand Canonical Ensemble. The effects of the number of cycles necessary to reach equilibrium, the initial allocation of particles, the displacement step and the simulation box size were investigated in particular for the Canonical Ensemble Monte Carlo simulations. Of these parameters, the displacement step had the most significant effect on the performance of the Monte Carlo simulation. The simulation box size was also important, especially at low pressures, at which the box must be sufficiently large to contain a statistically acceptable number of particles in the bulk phase. Finally, it was found that the Canonical Ensemble and the Gibbs Ensemble yielded the same isotherm (within statistical error); the computation time for GEMC was shorter than that for the Canonical Ensemble simulation, although the latter method describes the proper interface between the reservoir and the adsorbed phase (and hence the meniscus).
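
For readers unfamiliar with the two ingredients named above, here is a minimal sketch (not the simulation code of the paper) of the LJ 12-6 pair energy and the Metropolis acceptance rule for a trial particle displacement in an NVT move; the argon parameters are common literature values and the trial energy change is made up.

```python
# Minimal sketch: LJ 12-6 pair potential and Metropolis acceptance for an NVT move.
import math
import random

EPS_K = 119.8        # argon LJ well depth, K (common literature value)
SIGMA = 0.3405       # argon LJ collision diameter, nm

def lj_12_6(r_nm, eps_k=EPS_K, sigma=SIGMA):
    """Lennard-Jones 12-6 pair energy in Kelvin."""
    sr6 = (sigma / r_nm) ** 6
    return 4.0 * eps_k * (sr6 * sr6 - sr6)

def accept_displacement(delta_u_k, temperature_k):
    """Metropolis criterion: accept if energy decreases, else with Boltzmann probability."""
    if delta_u_k <= 0.0:
        return True
    return random.random() < math.exp(-delta_u_k / temperature_k)

# Hypothetical pair separation and trial move at 87.3 K:
print(lj_12_6(0.38), accept_displacement(50.0, 87.3))
```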

Relevance: 30.00%

Abstract:

The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare-event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine learning.
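
As a gentle illustration of the CE idea summarised above, the sketch below applies it to a toy continuous maximization problem: sample from a Gaussian, keep an elite fraction of the samples, and refit the Gaussian to the elite. The objective function and all tuning constants are illustrative choices, not taken from the tutorial.

```python
# A minimal sketch of the cross-entropy (CE) method for continuous maximization.
import numpy as np

def ce_maximize(objective, mu=0.0, sigma=5.0, n_samples=200, elite_frac=0.1, iters=50):
    rng = np.random.default_rng(0)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(iters):
        x = rng.normal(mu, sigma, n_samples)             # sample candidate solutions
        elite = x[np.argsort(objective(x))[-n_elite:]]   # keep the best-performing samples
        mu, sigma = elite.mean(), elite.std() + 1e-12    # refit the sampling density to the elite
    return mu

# Toy multimodal objective; CE concentrates the sampling density on the best peak.
f = lambda x: np.exp(-(x - 2.0) ** 2) + 0.8 * np.exp(-(x + 3.0) ** 2)
print(ce_maximize(f))   # approximately 2.0
```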

Relevance: 30.00%

Abstract:

Consider a network of unreliable links, modelling for example a communication network. Estimating the reliability of the network, expressed as the probability that certain nodes in the network are connected, is a computationally difficult task. In this paper we study how the Cross-Entropy method can be used to obtain more efficient network reliability estimation procedures. Three estimation techniques are considered: Crude Monte Carlo and the more sophisticated Permutation Monte Carlo and Merge Process. We show that the Cross-Entropy method yields a speed-up over all three techniques.
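
For reference, the Crude Monte Carlo baseline mentioned above can be sketched as follows: sample each unreliable link independently, then check whether the terminal nodes remain connected. The example graph, link reliabilities and terminal pair are hypothetical.

```python
# Crude Monte Carlo estimate of two-terminal network reliability (illustrative only).
import random

def connected(nodes, up_edges, s, t):
    """Depth-first search over the surviving edges."""
    adj = {v: [] for v in nodes}
    for u, v in up_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return t in seen

def crude_mc_reliability(nodes, edges, s, t, n_samples=100_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        up = [(u, v) for (u, v, p) in edges if rng.random() < p]  # each link works with probability p
        hits += connected(nodes, up, s, t)
    return hits / n_samples

# Hypothetical 4-node bridge network; estimate P(node 0 connected to node 3).
edges = [(0, 1, 0.9), (0, 2, 0.9), (1, 2, 0.8), (1, 3, 0.9), (2, 3, 0.9)]
print(crude_mc_reliability({0, 1, 2, 3}, edges, s=0, t=3))
```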

Relevance: 30.00%

Abstract:

Subsequent to the influential paper of Chan, Karolyi, Longstaff and Sanders [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of Hansen's J-test of over-identifying restrictions [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054]. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias. (c) 2006 Elsevier B.V. All rights reserved.
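
A schematic of the GMM setup for a CKLS-type short-rate model is sketched below; it is not the paper's estimation code, and the simulated rate path, starting values, identity weighting matrix and optimizer settings are simplifying assumptions made for illustration.

```python
# Sketch of GMM moment conditions for a discretized CKLS-type short-rate model:
#   r_{t+1} - r_t = alpha + beta * r_t + e_{t+1},
# with conditional variance E[e_{t+1}^2 | r_t] = sigma^2 * r_t^(2*gamma).
import numpy as np
from scipy.optimize import minimize

def ckls_moments(params, r):
    alpha, beta, sigma2, gamma = params
    r_t, r_next = r[:-1], r[1:]
    e = r_next - r_t - (alpha + beta * r_t)        # drift residual
    v = e**2 - sigma2 * r_t**(2.0 * gamma)         # variance residual
    # Four moment conditions; exactly identified here. The J-test discussed in the
    # literature uses additional instruments to create over-identifying restrictions.
    return np.column_stack([e, e * r_t, v, v * r_t]).mean(axis=0)

def gmm_objective(params, r):
    g_bar = ckls_moments(params, r)
    return g_bar @ g_bar                           # identity weighting matrix for simplicity

# Hypothetical usage with a simulated square-root-type short-rate path:
rng = np.random.default_rng(0)
r = np.empty(2000)
r[0] = 0.05
for t in range(r.size - 1):
    r[t + 1] = abs(r[t] + 0.5 * (0.05 - r[t]) / 252
                   + 0.05 * np.sqrt(r[t] / 252) * rng.standard_normal())
res = minimize(gmm_objective, x0=[1e-4, -2e-3, 1e-5, 0.5], args=(r,),
               method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-14, "maxiter": 20000})
print(res.x)   # [alpha, beta, sigma^2, gamma] estimates (illustrative only)
```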

Relevance: 30.00%

Abstract:

The existence of undesirable electricity price spikes in a competitive electricity market requires an efficient auction mechanism; however, many existing auction mechanisms have difficulty suppressing such unreasonable price spikes effectively. A new auction mechanism is proposed to suppress unreasonable price spikes effectively in a competitive electricity market. It optimally combines the system marginal price auction and the pay-as-bid auction mechanisms. A threshold value is determined to activate the switching between the marginal price auction and the proposed composite auction: when the system marginal price is higher than the threshold value, the composite auction for the high-price electricity market is activated. The winning electricity sellers then sell their electricity at the system marginal price or at their own bid prices, depending on their rights to be paid at the system marginal price and on their offers' impact on suppressing undesirable price spikes. Such economic stimuli discourage sellers from practising economic and physical withholding. Multiple price caps are proposed to regulate strong market power. We also compare other auction mechanisms to highlight the characteristics of the proposed one, and a numerical simulation is given to illustrate the procedure of the new auction mechanism.
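
The following is a deliberately simplified toy, not the mechanism proposed in the paper: it only illustrates the idea of switching from uniform (marginal-price) settlement to a pay-as-bid style settlement when the system marginal price exceeds a threshold, and it ignores the eligibility rules and multiple price caps described above. All bids, the demand level and the threshold are hypothetical.

```python
# Toy settlement rule illustrating a threshold switch between uniform-price and
# pay-as-bid settlement. Not the composite auction proposed in the paper.
def settle(bids, demand_mw, threshold):
    """bids: list of (seller, price $/MWh, quantity MW); cheapest offers dispatched first."""
    dispatched, supplied, smp = [], 0.0, 0.0
    for seller, price, qty in sorted(bids, key=lambda b: b[1]):
        if supplied >= demand_mw:
            break
        take = min(qty, demand_mw - supplied)
        dispatched.append((seller, price, take))
        supplied += take
        smp = price                                    # system marginal price = last accepted bid
    if smp <= threshold:
        return {s: smp for s, _, _ in dispatched}      # ordinary uniform (marginal-price) settlement
    # Above the threshold: pay each winner its own bid, a crude stand-in for the
    # pay-as-bid component of a composite auction.
    return {s: price for s, price, _ in dispatched}

bids = [("A", 30.0, 400), ("B", 45.0, 300), ("C", 300.0, 200)]
print(settle(bids, demand_mw=800, threshold=100.0))    # seller C's spike no longer sets everyone's price
```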

Relevance: 30.00%

Abstract:

In this paper we utilise a stochastic address model of broadcast oligopoly markets to analyse the Australian broadcast television market. In particular, we examine the effect of the presence of a single government market participant in this market. An examination of the dynamics of the simulations demonstrates that the presence of a government market participant can simultaneously generate positive outcomes for viewers as well as for other market suppliers. Further examination of simulation dynamics indicates that privatisation of the government market participant results in reduced viewer choice and diversity. We also demonstrate that additional private market participants would not result in significant benefits to viewers.

Relevance: 30.00%

Abstract:

The country-product-dummy (CPD) method, originally proposed by Summers (1973), has recently been revisited in its weighted formulation to handle a variety of data-related situations (Rao and Timmer, 2000, 2003; Heravi et al., 2001; Rao, 2001; Aten and Menezes, 2002; Heston and Aten, 2002; Deaton et al., 2004). The CPD method is also increasingly being used in the context of hedonic modelling, rather than for its original purpose in Summers (1973) of filling holes in the data. However, the CPD method is seen among practitioners as a black box because of its regression formulation. The main objective of this paper is to establish the equivalence of the purchasing power parities and international prices derived from the weighted CPD method with those arising from the Rao system for multilateral comparisons. A major implication of this result is that the weighted CPD method is then a natural method of aggregation at all levels of aggregation within the context of international comparisons.
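
A minimal sketch of a weighted CPD regression of the kind discussed above follows; the prices and weights are made up, and the dummy coding (base country dropped, weighting via rescaled least squares) is one standard parameterization rather than the paper's exact formulation.

```python
# Sketch of a weighted country-product-dummy (CPD) regression: regress log prices
# on country and product dummies; exponentiated country coefficients give PPPs
# relative to the base country. All data below are hypothetical.
import numpy as np

# prices[c][n] and weights[c][n] for 3 hypothetical countries and 3 products.
prices  = np.array([[1.00, 2.00, 5.00],
                    [110.0, 230.0, 520.0],
                    [0.80, 1.70, 4.20]])
weights = np.array([[0.5, 0.3, 0.2],
                    [0.4, 0.4, 0.2],
                    [0.6, 0.2, 0.2]])
n_c, n_p = prices.shape

rows, y, w = [], [], []
for c in range(n_c):
    for n in range(n_p):
        d_country = np.eye(n_c)[c][1:]        # base country (c = 0) dummy dropped
        d_product = np.eye(n_p)[n]            # product dummies act as log international prices
        rows.append(np.concatenate([d_country, d_product]))
        y.append(np.log(prices[c, n]))
        w.append(weights[c, n])

X, y, w = np.array(rows), np.array(y), np.array(w)
sw = np.sqrt(w)                               # weighted least squares via rescaling
coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)

ppp = np.concatenate([[1.0], np.exp(coef[:n_c - 1])])   # PPPs relative to country 0
print("PPPs:", ppp)
```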

Relevance: 30.00%

Abstract:

The published requirements for accurate measurement of heat transfer at the interface between two bodies have been reviewed. A strategy for reliable measurement has been established, based on the depth of the temperature sensors in the medium, on the inverse method parameters and on the time response of the sensors. Sources of both deterministic and stochastic errors have been investigated and a method to evaluate them has been proposed, with the help of a normalisation technique. The key normalisation variables are the duration of the heat input and the maximum heat flux density. An example of the application of this technique in the field of high pressure die casting is presented. The normalisation study, coupled with prior determination of the heat input duration, makes it possible to determine the optimum location for the sensors, along with an acceptable sampling rate and the critical response time of the thermocouples (as well as any filter characteristics). Results from the gauge are used to assess the suitability of the initial design choices. In particular, the unavoidable response time of the thermocouples is estimated by comparison with the normalised simulation. (c) 2006 Elsevier Ltd. All rights reserved.
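
On the thermocouple response-time point, here is a small hedged sketch (not the paper's gauge model): a sensor with time constant tau records a first-order-lagged version of the true temperature, which attenuates and delays a short heat pulse. The pulse shape, its duration and the value of tau are hypothetical.

```python
# Sketch of the thermocouple response-time effect: the sensor obeys
#   dT_meas/dt = (T_true - T_meas) / tau,
# integrated here with a simple explicit Euler step.
import math

def first_order_lag(t_true, dt, tau):
    """Apply a first-order sensor lag to a sampled temperature trace."""
    t_meas = [t_true[0]]
    for temp in t_true[1:]:
        t_meas.append(t_meas[-1] + dt * (temp - t_meas[-1]) / tau)
    return t_meas

dt, duration = 0.001, 0.05                    # 1 ms sampling, 50 ms heat input (hypothetical)
times = [i * dt for i in range(int(0.2 / dt))]
true_trace = [25.0 + 150.0 * math.exp(-((t - 0.5 * duration) / (0.3 * duration)) ** 2) for t in times]
measured = first_order_lag(true_trace, dt, tau=0.02)   # 20 ms response time (hypothetical)

print(f"true peak: {max(true_trace):.1f} C, measured peak: {max(measured):.1f} C")
```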