962 results for Monte-Carlo-Simulation
Abstract:
This paper presents an accurate and efficient solution for the random transverse and angular displacement fields of uncertain Timoshenko beams. Approximate, numerical solutions are obtained using the Galerkin method and chaos polynomials. The Chaos-Galerkin scheme is constructed by respecting the theoretical conditions for existence and uniqueness of the solution. Numerical results show fast convergence to the exact solution and excellent accuracy. The developed Chaos-Galerkin scheme accurately approximates the complete cumulative distribution function of the displacement responses. The Chaos-Galerkin scheme developed herein is a theoretically sound and efficient method for the solution of stochastic problems in engineering. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
The selection criteria for Euler-Bernoulli or Timoshenko beam theories are generally given by means of some deterministic rule involving beam dimensions. The Euler-Bernoulli beam theory is used to model the behavior of flexure-dominated (or "long") beams. The Timoshenko theory applies to shear-dominated (or "short") beams. In the mid-length range, both theories should be equivalent, and some agreement between them would be expected. Indeed, it is shown in the paper that, for some mid-length beams, the deterministic displacement responses for the two theories agree very well. However, the article points out that the behavior of the two beam models is radically different in terms of uncertainty propagation. In the paper, some beam parameters are modeled as parameterized stochastic processes. The two formulations are implemented and solved via a Monte Carlo-Galerkin scheme. It is shown that, for uncertain elasticity modulus, propagation of uncertainty to the displacement response is much larger for Timoshenko beams than for Euler-Bernoulli beams. On the other hand, propagation of the uncertainty for random beam height is much larger for Euler beam displacements. Hence, any reliability or risk analysis becomes completely dependent on the beam theory employed. The authors believe this is not widely acknowledged by the structural safety or stochastic mechanics communities. (C) 2010 Elsevier Ltd. All rights reserved.
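The random-height effect described in this abstract can be illustrated with a minimal Monte Carlo sketch (hypothetical cantilever numbers, not the authors' Monte Carlo-Galerkin scheme): the Euler-Bernoulli tip deflection of a cantilever scales as h^-3 in the beam height, while the shear term that Timoshenko theory adds scales only as h^-1, so the same height uncertainty propagates more strongly through the Euler-Bernoulli response.

```python
import random
import statistics

# Illustrative short cantilever (hypothetical numbers, not the paper's example):
# tip load P, rectangular section of width b, random height h.
L_beam, P = 0.5, 1000.0            # length [m], tip load [N]
b = 0.1                            # section width [m]
E, nu, k = 200e9, 0.3, 5.0 / 6.0   # modulus [Pa], Poisson ratio, shear factor

def tip_deflections(h):
    """Cantilever tip deflection under both beam theories for height h."""
    I = b * h ** 3 / 12.0
    A = b * h
    G = E / (2.0 * (1.0 + nu))
    w_eb = P * L_beam ** 3 / (3.0 * E * I)   # Euler-Bernoulli: bending only
    w_t = w_eb + P * L_beam / (k * G * A)    # Timoshenko: adds a shear term
    return w_eb, w_t

rng = random.Random(0)
pairs = [tip_deflections(rng.gauss(0.2, 0.02)) for _ in range(20000)]
eb, ti = zip(*pairs)
cv_eb = statistics.stdev(eb) / statistics.mean(eb)
cv_t = statistics.stdev(ti) / statistics.mean(ti)
print(f"CV of deflection, Euler-Bernoulli: {cv_eb:.3f}")
print(f"CV of deflection, Timoshenko:      {cv_t:.3f}")
```

The coefficient of variation of the Euler-Bernoulli deflection comes out larger, in line with the abstract's observation for random beam height; the gap widens as the beam gets shorter and the shear term grows.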
Abstract:
In this paper, the Askey-Wiener scheme and the Galerkin method are used to obtain approximate solutions to stochastic beam bending on Winkler foundation. The study addresses Euler-Bernoulli beams with uncertainty in the bending stiffness modulus and in the stiffness of the foundation. Uncertainties are represented by parameterized stochastic processes. The random behavior of beam response is modeled using the Askey-Wiener scheme. One contribution of the paper is a sketch of proof of existence and uniqueness of the solution to problems involving fourth order operators applied to random fields. From the approximate Galerkin solution, expected value and variance of beam displacement responses are derived, and compared with corresponding estimates obtained via Monte Carlo simulation. Results show very fast convergence and excellent accuracies in comparison to Monte Carlo simulation. The Askey-Wiener Galerkin scheme presented herein is shown to be a theoretically solid and numerically efficient method for the solution of stochastic problems in engineering.
Abstract:
This article discusses the main aspects of the Brazilian real estate market in order to assess whether it would be attractive for a typical American real estate investor to buy office-building portfolios in Brazil. The article emphasizes: [i] the regulatory frontiers, comparing investment securitization using a typical American REIT structure with the Brazilian solution, the Fundo de Investimento Imobiliario - FII; [ii] the investment quality attributes in the Brazilian market, using an office-building prototype; and [iii] the comparison of [risk vs. yield] generated by an investment in the Brazilian market, using a FII, benchmarked against an existing REIT (OFFICE SUB-SECTOR) in the USA market. We conclude that investing dollars exchanged for Reais [the Brazilian currency] in a FII with a triple-A office-building portfolio in the Sao Paulo marketplace will yield an annual income and a premium return above an American REIT investment. Even under a highly aggressive scenario, with a strong and persistent detachment of the exchange rate from IGP-M variations, with instabilities affecting the generation of income, and with a 300-point margin adopted for the Brazil-Risk level, the investment opportunity in the Brazilian market, in the segment we have analyzed, outperforms an equivalent investment in the American market.
Abstract:
In this paper a computational implementation of an evolutionary algorithm (EA) is presented to tackle the problem of reconfiguring radial distribution systems. The developed module considers power quality indices such as long-duration interruptions and customer process disruptions due to voltage sags, by using the Monte Carlo simulation method. Power quality costs are modeled into the mathematical problem formulation and added to the cost of network losses. As for the EA codification proposed, a decimal representation is used. The EA operators, namely selection, recombination and mutation, which are considered for the reconfiguration algorithm, are herein analyzed. A number of selection procedures are analyzed, namely tournament, elitism and a mixed technique using both elitism and tournament. The recombination operator was developed by considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and the feasibility of network operation to exchange genetic material. The topologies of the initial population are randomly generated, so that radial configurations are produced through the Prim and Kruskal algorithms, which rapidly build minimum spanning trees. (C) 2009 Elsevier B.V. All rights reserved.
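The initial-population step described above can be sketched in a few lines (an assumed minimal setup, not the paper's implementation): shuffling the candidate branches is equivalent to assigning random weights, and a Kruskal pass with union-find then keeps exactly the branches of a random spanning tree, i.e. a radial configuration.

```python
import random

def random_radial_configuration(n_buses, branches, rng):
    """Draw a random radial topology: shuffle the candidate branches
    (equivalent to random weights) and keep a spanning tree via Kruskal."""
    parent = list(range(n_buses))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    shuffled = branches[:]
    rng.shuffle(shuffled)
    tree = []
    for u, v in shuffled:
        ru, rv = find(u), find(v)
        if ru != rv:                        # keep a branch only if it closes no loop
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Hypothetical 6-bus system with 8 candidate branches (illustrative only).
rng = random.Random(1)
branches = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4), (2, 5)]
tree = random_radial_configuration(6, branches, rng)
print(tree)  # 5 branches, no loops: a radial configuration over all 6 buses
```

Re-running with different seeds yields different radial individuals, which is what a diverse initial EA population requires.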
Abstract:
A novel methodology to assess the risk of power transformer failures caused by external faults, such as short-circuit, taking the paper insulation condition into account, is presented. The risk index is obtained by contrasting the insulation paper condition with the probability that the transformer withstands the short-circuit current flowing along the winding during an external fault. In order to assess the risk, this probability and the value of the degree of polymerization of the insulating paper are regarded as inputs of a type-2 fuzzy logic system (T2-FLS), which computes the fuzzy risk level. A Monte Carlo simulation has been used to find the survival function of the currents flowing through the transformer winding during a single-phase or a three-phase short-circuit. The Roy Billinton Test System and a real power system have been used to test the results. (C) 2008 Elsevier B.V. All rights reserved.
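The survival function mentioned in this abstract is straightforward to estimate by Monte Carlo; the sketch below uses a hypothetical lognormal stand-in for the short-circuit current distribution (the paper derives it from the actual network), and simply counts the fraction of sampled currents exceeding each threshold.

```python
import random

def survival_function(samples, thresholds):
    """Empirical survival S(x) = P(I > x) from Monte Carlo current samples."""
    n = len(samples)
    return [sum(1 for s in samples if s > x) / n for x in thresholds]

# Hypothetical fault-current model: lognormal spread around a nominal
# fault level (illustrative only, not the paper's network model).
rng = random.Random(42)
currents = [rng.lognormvariate(2.0, 0.3) for _ in range(50000)]  # [kA]
S = survival_function(currents, [5.0, 7.5, 10.0, 12.5])
print([round(p, 3) for p in S])
```

The resulting survival probabilities are then one of the inputs, alongside the degree of polymerization of the paper insulation, that the type-2 fuzzy system combines into a risk level.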
Abstract:
Market-based transmission expansion planning gives investors information on where the most cost-efficient place to invest is, and brings benefits to those who invest in the grid. However, both market issues and power system adequacy problems are system planners' concerns. In this paper, a hybrid probabilistic criterion of Expected Economical Loss (EEL) is proposed as an index to evaluate a system's overall expected economical losses during operation in a competitive market. It reflects both the investors' and the planner's points of view and further improves the traditional reliability cost. By applying EEL, system planners can obtain a clear idea of the transmission network's bottleneck and the amount of loss arising from this weak point. Sequentially, it enables planners to assess the worth of providing reliable services. The EEL also contains valuable information to guide investors' decisions. This index truly reflects the random behavior of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so it can be utilized in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), which shows how the EEL can predict the current system bottleneck under future operational conditions and how the EEL can be used as a planning objective to determine future optimal plans. A well-known simulation method, Monte Carlo simulation, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as a multi-objective optimization tool.
Abstract:
I shall discuss the quantum and classical dynamics of a class of nonlinear Hamiltonian systems. The discussion will be restricted to systems with one degree of freedom. Such systems cannot exhibit chaos, unless the Hamiltonians are time dependent. Thus we shall consider systems with a potential function that has a higher than quadratic dependence on the position and, furthermore, we shall allow the potential function to be a periodic function of time. This is the simplest class of Hamiltonian system that can exhibit chaotic dynamics. I shall show how such systems can be realized in atom optics, where very cold atoms interact with optical dipole potentials of a far-off resonance laser. Such systems are ideal for quantum chaos studies as (i) the energy of the atom is small and action scales are of the order of Planck's constant, (ii) the systems are almost perfectly isolated from the decohering effects of the environment and (iii) optical methods enable exquisite time dependent control of the mechanical potentials seen by the atoms.
Abstract:
This article deals with the efficiency of fractional integration parameter estimators. The study was based on Monte Carlo experiments involving simulated stochastic processes with integration orders in the open interval (-1, 1). The evaluated estimation methods were classified into two groups: heuristic and semiparametric/maximum likelihood (ML). The study revealed that the comparative efficiency of the estimators, measured by the smaller mean squared error, depends on the stationarity/non-stationarity and persistence/anti-persistence conditions of the series. The ML estimator was shown to be superior for stationary persistent processes; the wavelet spectrum-based estimators were better for non-stationary mean-reverting and invertible anti-persistent processes; the weighted periodogram-based estimator was shown to be superior for non-invertible anti-persistent processes.
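The comparison protocol used in this abstract, ranking estimators by Monte Carlo mean squared error over replicated simulated series, can be shown with a toy pair of estimators; the sketch below deliberately uses two variance estimators rather than the study's fractional-integration estimators, since it only illustrates how the MSE criterion is computed.

```python
import random
import statistics

def mc_mse(estimator, true_value, n_obs, n_reps, rng):
    """Monte Carlo mean squared error of an estimator over replications."""
    errs = []
    for _ in range(n_reps):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n_obs)]
        errs.append((estimator(sample) - true_value) ** 2)
    return statistics.fmean(errs)

var_unbiased = statistics.variance       # divides by n - 1

def var_ml(xs):                          # maximum likelihood: divides by n
    m = statistics.fmean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(7)
mse_u = mc_mse(var_unbiased, 1.0, 20, 20000, rng)
mse_ml = mc_mse(var_ml, 1.0, 20, 20000, rng)
print(f"MSE unbiased: {mse_u:.4f}, MSE ML: {mse_ml:.4f}")
```

Here the biased ML variance estimator achieves the smaller MSE, a standard example of the bias-variance trade-off the criterion captures; in the study the same protocol is applied across integration orders and estimator families.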
Abstract:
Objective: The Assessing Cost-Effectiveness - Mental Health (ACE-MH) study aims to assess from a health sector perspective, whether there are options for change that could improve the effectiveness and efficiency of Australia's current mental health services by directing available resources toward 'best practice' cost-effective services. Method: The use of standardized evaluation methods addresses the reservations expressed by many economists about the simplistic use of League Tables based on economic studies confounded by differences in methods, context and setting. The cost-effectiveness ratio for each intervention is calculated using economic and epidemiological data. This includes systematic reviews and randomised controlled trials for efficacy, the Australian Surveys of Mental Health and Wellbeing for current practice and a combination of trials and longitudinal studies for adherence. The cost-effectiveness ratios are presented as cost (A$) per disability-adjusted life year (DALY) saved with a 95% uncertainty interval based on Monte Carlo simulation modelling. An assessment of interventions on 'second filter' criteria ('equity', 'strength of evidence', 'feasibility' and 'acceptability to stakeholders') allows broader concepts of 'benefit' to be taken into account, as well as factors that might influence policy judgements in addition to cost-effectiveness ratios. Conclusions: The main limitation of the study is in the translation of the effect size from trials into a change in the DALY disability weight, which required the use of newly developed methods. While comparisons within disorders are valid, comparisons across disorders should be made with caution. A series of articles is planned to present the results.
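The "cost per DALY saved with a 95% uncertainty interval based on Monte Carlo simulation" in this abstract amounts to sampling cost and health-gain distributions, forming the ratio, and reading off percentiles; the sketch below uses entirely hypothetical input distributions, not the ACE-MH study's data.

```python
import random

def cost_per_daly_interval(n=100000, seed=0):
    """Median and 95% uncertainty interval for a cost-effectiveness ratio.
    Hypothetical inputs: cost ~ N(5000, 800) A$, DALYs saved ~ N(0.25, 0.05)."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n):
        cost = rng.gauss(5000.0, 800.0)
        dalys = max(rng.gauss(0.25, 0.05), 1e-6)  # guard against non-positive draws
        ratios.append(cost / dalys)
    ratios.sort()
    lo = ratios[int(0.025 * n)]
    hi = ratios[int(0.975 * n)]
    return ratios[n // 2], (lo, hi)

median, (lo, hi) = cost_per_daly_interval()
print(f"A${median:,.0f} per DALY (95% UI A${lo:,.0f} to A${hi:,.0f})")
```

Because the ratio is nonlinear in its inputs, the interval is asymmetric around the median, which is why the study reports percentile-based uncertainty intervals rather than symmetric standard errors.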
Abstract:
Hepatitis C virus (HCV) is a frequent cause of acute and chronic hepatitis and a leading cause of cirrhosis of the liver and hepatocellular carcinoma. HCV is classified in six major genotypes and more than 70 subtypes. In Colombian blood banks, serum samples were tested for anti-HCV antibodies using a third-generation ELISA. The aim of this study was to characterize the viral sequences in plasma of 184 volunteer blood donors who attended the "Banco Nacional de Sangre de la Cruz Roja Colombiana", Bogota, Colombia. Three different HCV genomic regions were amplified by nested PCR. The first of these was a segment of 180 bp of the 5'UTR region to confirm the previous diagnosis by ELISA. From those that were positive for the 5'UTR region, two further segments were amplified for genotyping and subtyping by phylogenetic analysis: a segment of 380 bp from the NS5B region, and a segment of 391 bp from the E1 region. The distribution of HCV subtypes was: 1b (82.8%), 1a (5.7%), 2a (5.7%), 2b (2.8%), and 3a (2.8%). By applying Bayesian Markov chain Monte Carlo simulation, it was estimated that HCV-1b was introduced into Bogota around 1950. Also, this subtype spread at an exponential rate between about 1970 and about 1990, after which transmission of HCV was reduced by anti-HCV testing of this population. Among Colombian blood donors, HCV genotype 1b is the most frequent genotype, especially in large urban conglomerates such as Bogota, as is the case in other South American countries. J. Med. Virol. 82: 1889-1898, 2010. (C) 2010 Wiley-Liss, Inc.
Abstract:
A significant loss in electron probe current can occur before the electron beam enters the specimen chamber of an environmental scanning electron microscope (ESEM). This loss results from electron scattering in a gaseous jet formed inside and downstream (above) the pressure-limiting aperture (PLA), which separates the high-pressure and high-vacuum regions of the microscope. The electron beam loss above the PLA has been calculated for three different ESEMs, each with a different PLA geometry: an ElectroScan E3, a Philips XL30 ESEM, and a prototype instrument. The mass thickness of gas above the PLA in each case has been determined by Monte Carlo simulation of the gas density variation in the gas jet. It has been found that the PLA configurations used in the commercial instruments produce considerable loss in the electron probe current that dramatically degrades their performance at high chamber pressure and low accelerating voltage. These detrimental effects are minimized in the prototype instrument, which has an optimized thin-foil PLA design.
Abstract:
1. There are a variety of methods that could be used to increase the efficiency of the design of experiments. However, it is only recently that such methods have been considered in the design of clinical pharmacology trials. 2. Two such methods, termed data-dependent (e.g. simulation) and data-independent (e.g. analytical evaluation of the information in a particular design), are becoming increasingly used as efficient methods for designing clinical trials. These two design methods have tended to be viewed as competitive, although a complementary role in design is proposed here. 3. The impetus for the use of these two methods has been the need for a more fully integrated approach to the drug development process that specifically allows for sequential development (i.e. where the results of early phase studies influence later-phase studies). 4. The present article briefly presents the background and theory that underpins both the data-dependent and -independent methods with the use of illustrative examples from the literature. In addition, the potential advantages and disadvantages of each method are discussed.
Abstract:
An important feature of improving lattice gas models and classical isotherms is the incorporation of a pore-size-dependent capacity, which has hitherto been overlooked. In this paper, we develop a model for predicting the temperature-dependent variation in capacity with pore size. The model is based on the analysis of a lattice gas model using a density functional theory approach at the close-packed limit. Fluid-fluid and solid-fluid interactions are modeled by the Lennard-Jones 12-6 potential and Steele's 10-4-3 potential, respectively. The capacity of methane in a slit-shaped carbon pore is calculated from the characteristic parameters of the unit cell, which are extracted by minimizing the grand potential of the unit cell. The capacities predicted by the proposed model are in good agreement with those obtained from grand canonical Monte Carlo simulation, for pores that can accommodate up to three adsorbed layers. Single-particle and pair distributions exhibit characteristic features that correspond to the sequence of buckling and rhombic transitions that occur as the slit pore width is increased. The model provides a useful tool to model continuous variation in the microstructure of an adsorbed phase, namely buckling and rhombic transitions, with increasing pore width. (C) 2002 American Institute of Physics.
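The fluid-fluid interaction named in this abstract is the standard Lennard-Jones 12-6 pair potential, u(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6); a minimal implementation in reduced units (eps = sigma = 1, values illustrative) is:

```python
def lj_12_6(r, eps=1.0, sigma=1.0):
    """Lennard-Jones 12-6 pair potential:
    u(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# The well minimum sits at r = 2**(1/6)*sigma with depth -eps,
# and the potential crosses zero at r = sigma.
print(lj_12_6(2 ** (1 / 6)))  # -1.0 in reduced units
print(lj_12_6(1.0))           # 0.0
```

In a grand canonical Monte Carlo run of the kind the paper compares against, this pairwise energy (together with the solid-fluid Steele 10-4-3 wall potential) enters the acceptance probability of insertion, deletion, and displacement moves.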
Abstract:
In this paper we analyzed the adsorption of gases and vapors on graphitised thermal carbon black using a modified DFT-lattice theory, in which we assume that the behavior of the first layer of the adsorption film differs from that of the second and higher layers. The effects of various parameters on the topology of the adsorption isotherm were first investigated, and the model was then applied to the analysis of adsorption data of numerous substances on carbon black. We found that the first layer of the adsorption film behaves differently from the second and higher layers, in that the adsorbate-adsorbate interaction energy in the first layer is lower than that of the second and higher layers; the same is observed for the partition function. Furthermore, the adsorbate-adsorbate and adsorbate-adsorbent interaction energies obtained from the fitting are consistently lower than the corresponding values obtained from the viscosity data and calculated from the Lorentz-Berthelot rule, respectively.