164 results for Probabilistic methods
Abstract:
The problem of time-variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements, as well as in the postulated model for the structural behaviour, are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time-varying mean and a random component that can be treated as weakly stationary. For linear systems, an approximate analytical solution to the problem of reliability model updating is obtained by combining the theories of the discrete Kalman filter and level crossing statistics. For nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using strong forms of the Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed using a limited number of pertinent Monte Carlo simulations.
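For the linear case, the measurement-conditioned response statistics come from a standard discrete Kalman filter; a minimal sketch follows (illustrative Python with generic system matrices, not the paper's notation):

    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        """One predict/update cycle: state mean x, covariance P, measurement z,
        discretized dynamics A, observation matrix H, noise covariances Q, R."""
        x_pred = A @ x                        # propagate the conditional mean
        P_pred = A @ P @ A.T + Q              # propagate the conditional covariance
        S = H @ P_pred @ H.T + R              # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

The conditional mean and covariance produced this way define the updated Gaussian response model whose level-crossing statistics yield the revised reliability estimate.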
Abstract:
Probabilistic analysis of the cracking moment of 22 simply supported reinforced concrete beams is performed. When the basic variables follow the distribution considered in this study, the cracking moment of a beam is found to follow a normal distribution. An expression is derived for the characteristic cracking moment, which will be useful in examining reinforced concrete beams for a limit state of cracking.
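As an illustration of how such a characteristic value is used (the 5% fractile level and the numbers below are our assumptions, not the paper's expression):

    from scipy.stats import norm

    mu_cr = 55.0                 # hypothetical mean cracking moment, kN m
    cov = 0.12                   # hypothetical coefficient of variation
    sigma_cr = cov * mu_cr
    # 5% fractile of a normal distribution: M_k = mu - 1.645*sigma
    M_k = norm.ppf(0.05, loc=mu_cr, scale=sigma_cr)
    print(f"characteristic cracking moment ~ {M_k:.1f} kN m")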
Abstract:
A general analysis of the Hamilton-Jacobi form of dynamics motivated by phase space methods and classical transformation theory is presented. The connection between constants of motion, symmetries, and the Hamilton-Jacobi equation is described.
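For reference, Hamilton's principal function S(q, t) satisfies the Hamilton-Jacobi equation

\[ \frac{\partial S}{\partial t} + H\!\left(q, \frac{\partial S}{\partial q}, t\right) = 0, \]

and when H has no explicit time dependence the separation S(q, t) = W(q) - Et ties the separation constant E to the conserved energy, the prototype of the connection between constants of motion and complete solutions of this equation.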
Abstract:
In this paper we discuss the limits of validity of Whitham's characteristic rule for finding successive positions of a shock in one space dimension. We start with an example for which the exact solution is known and show that the characteristic rule gives the correct result only if the state behind the shock is uniform. We then take the gas dynamic equations in two cases, one of a shock propagating through a stratified layer and the other of a shock propagating down a nonuniform tube, and derive exact equations for the evolution of the shock amplitude along a shock path. These exact results are then compared with the results obtained by the characteristic rule. The characteristic rule not only incorrectly accounts for the deviation of the state behind the shock from a uniform state, but also gives a coefficient in the equation which differs significantly from the exact coefficient over a wide range of values of the shock strength.
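Schematically, the characteristic rule (the Chester-Chisnell-Whitham approximation) applies the differential relation valid along a forward characteristic to the state immediately behind the shock; for flow in a tube of slowly varying area A(x) this reads

\[ dp + \rho c\, du + \frac{\rho c^{2} u}{u + c}\,\frac{dA}{A} = 0, \]

with p, rho, u and c behind the shock expressed through the shock Mach number via the Rankine-Hugoniot conditions. The rule's neglect of disturbances overtaking the shock from behind is the source of the errors quantified in the comparison.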
Abstract:
In this paper, a novel genetic algorithm is developed that generates artificial chromosomes with probability control to solve machine scheduling problems. Generating artificial chromosomes for a Genetic Algorithm (ACGA) is closely related to Evolutionary Algorithms Based on Probabilistic Models (EAPM). The artificial chromosomes are generated by a probability model that extracts gene information from the current population. ACGA is considered a hybrid algorithm because both conventional genetic operators and a probability model are integrated. The ACGA proposed in this paper further employs the "evaporation concept" used in Ant Colony Optimization (ACO) to solve the permutation flowshop problem. The evaporation concept is used to reduce the effect of past experience and to explore new alternative solutions. We propose three different methods for the probability of evaporation, which is applied as soon as a job is assigned to a position in the permutation flowshop problem. Experimental results show that ACGA with the evaporation concept outperforms several algorithms in the literature.
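A minimal sketch of the artificial-chromosome idea, with a job-position frequency model and per-assignment evaporation (the rate rho and all names are illustrative; the paper itself tests three evaporation variants):

    import numpy as np

    rng = np.random.default_rng(0)

    def probability_matrix(population, n_jobs):
        """Count how often job j occupies position p across the population."""
        M = np.ones((n_jobs, n_jobs))          # small prior to avoid zero mass
        for perm in population:
            for pos, job in enumerate(perm):
                M[job, pos] += 1
        return M

    def sample_artificial_chromosome(M, rho=0.1):
        """Sample a permutation position by position; as soon as a job is
        assigned, evaporate that entry of the shared model so later samples
        rely less on past experience."""
        n = M.shape[0]
        free = list(range(n))
        perm = []
        for pos in range(n):
            w = np.array([M[j, pos] for j in free])
            job = free[rng.choice(len(free), p=w / w.sum())]
            perm.append(job)
            free.remove(job)
            M[job, pos] *= (1.0 - rho)         # evaporation of past experience
        return perm

    pop = [list(rng.permutation(5)) for _ in range(20)]
    M = probability_matrix(pop, 5)
    print(sample_artificial_chromosome(M))

Chromosomes generated this way are injected into the population alongside offspring produced by the conventional crossover and mutation operators.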
Abstract:
A pure sample of nitrosyl chloride has been prepared either by the reaction of phosphorus trichloride with concentrated nitric acid or by the reaction of phosphorus trichloride with sodium nitrate in the presence of water. The nitrosyl chloride gas has been characterized by i.r. spectral data and elemental analysis.
Abstract:
Representation and quantification of uncertainty in climate change impact studies are difficult tasks. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows the allocation of a probability mass to sets or intervals, and can hence handle both aleatory (stochastic) uncertainty and epistemic (subjective) uncertainty. This paper shows how the D-S theory can be used to represent beliefs in hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measure of belief and plausibility in the results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, and are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents the uncertainty associated with each of the SSFI-4 classifications. These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in the projected frequencies of the SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and the relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and a decreasing probability of normal and wet conditions in Orissa as a result of climate change.
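A minimal sketch of Dempster's rule of combination, which underlies the evidence fusion described above (the bpa numbers below are invented for illustration and are not the study's):

    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two bodies of evidence; m1, m2 map frozensets (hypotheses)
        to basic probability masses summing to one."""
        combined, conflict = {}, 0.0
        for (B, b), (C, c) in product(m1.items(), m2.items()):
            A = B & C
            if A:
                combined[A] = combined.get(A, 0.0) + b * c
            else:
                conflict += b * c            # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: Dempster's rule is undefined")
        return {A: v / (1.0 - conflict) for A, v in combined.items()}

    drought, normal = frozenset({"drought"}), frozenset({"normal"})
    either = drought | normal
    m_gcm1 = {drought: 0.6, either: 0.4}                 # hypothetical bpa 1
    m_gcm2 = {drought: 0.5, normal: 0.2, either: 0.3}    # hypothetical bpa 2
    print(dempster_combine(m_gcm1, m_gcm2))

Belief in a hypothesis is then the total mass of its subsets, and plausibility the total mass of all sets intersecting it; the alternative combination rules mentioned above differ mainly in how the conflict mass is redistributed.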
Abstract:
An a priori error analysis of discontinuous Galerkin methods for a general elliptic problem is derived under a mild elliptic regularity assumption on the solution. This is accomplished by using some techniques from a posteriori error analysis. The model problem is assumed to satisfy a Gårding-type inequality. Optimal-order L^2-norm a priori error estimates are derived for an adjoint consistent interior penalty method.
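Schematically, the optimal-order estimate has the familiar form

\[ \| u - u_h \|_{L^2(\Omega)} \le C\, h^{p+1} \| u \|_{H^{p+1}(\Omega)} \]

for a degree-p approximation and a smooth solution; the point of the analysis is that an estimate of this order survives under much weaker regularity of u, and adjoint consistency is what allows the duality (Aubin-Nitsche) argument behind such L^2 estimates to go through.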
Abstract:
Six models (simulators) are formulated and developed with all possible combinations of pressure and saturation of the phases as primary variables. A comparative study of the six simulators is carried out with two numerical methods, the conventional simultaneous method and a modified sequential method. The results of the numerical models are compared with laboratory experimental results to study the accuracy of the models, especially in heterogeneous porous media. From the study it is observed that the simulator using the pressure and saturation of the wetting fluid (PW, SW formulation) is the best among the models tested. Many simulators with a nonwetting-phase quantity as one of the primary variables did not converge when used with the simultaneous method. Based on simulator 1 (PW, SW formulation), a comparison of different solution methods, namely the simultaneous method, the modified sequential method and the adaptive-solution modified sequential method, is carried out on four test problems, including heterogeneous and randomly heterogeneous problems. It is found that the modified sequential and adaptive-solution modified sequential methods can halve the memory requirement, and the CPU time they require is much less than that of the simultaneous method. It is also found that the simulator with PNW and PW as the primary variables, which had convergence problems with the simultaneous method, converged with both the modified sequential method and the adaptive-solution modified sequential method. The present study indicates that the pressure and saturation formulation, together with the adaptive-solution modified sequential method, is the best among the simulators and methods tested.
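The control flow that distinguishes a sequential strategy from a simultaneous one can be sketched as follows (the two inner solvers are trivial scalar placeholders standing in for the implicit pressure and saturation solves; all names are illustrative):

    import numpy as np

    def solve_pressure(pw, sw, dt):
        # placeholder for the implicit pressure solve
        return 0.5 * (pw + (1.0 - sw))

    def solve_saturation(pw, sw, dt):
        # placeholder for the saturation update driven by the new pressure
        return np.clip(sw + dt * (pw - sw), 0.0, 1.0)

    def modified_sequential_step(pw, sw, dt, tol=1e-6, max_iter=50):
        """Iterate pressure and saturation solves within one time step until
        both fields stop changing, instead of solving them simultaneously."""
        for _ in range(max_iter):
            pw_new = solve_pressure(pw, sw, dt)
            sw_new = solve_saturation(pw_new, sw, dt)
            if max(abs(pw_new - pw), abs(sw_new - sw)) < tol:
                return pw_new, sw_new
            pw, sw = pw_new, sw_new
        raise RuntimeError("sequential iteration did not converge")

    print(modified_sequential_step(1.0, 0.3, 0.1))

Solving the two smaller systems in turn, rather than one coupled system, is what yields the memory and CPU savings reported above.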
Abstract:
NDDO-based (AM1) configuration interaction (CI) calculations have been used to calculate the wavelengths and oscillator strengths of electronic absorptions in organic molecules, and the results used in a sum-over-states treatment to calculate second-order hyperpolarizabilities. The results for both spectra and hyperpolarizabilities are of acceptable quality as long as a suitable CI expansion is used. We have found that using an active space of eight electrons in eight orbitals and including all single and pair-double excitations in the CI leads to results that agree well with experiment and do not change significantly with increasing active space for most organic molecules. Calculated second-order hyperpolarizabilities using this type of CI within a sum-over-states calculation appear to be of useful accuracy.
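Up to convention-dependent prefactors, and with the frequency dependence suppressed, the static sum-over-states expression being evaluated has the schematic form

\[ \gamma \propto \sum_{m,n,p \neq 0} \frac{\mu_{0m}\,\bar{\mu}_{mn}\,\bar{\mu}_{np}\,\mu_{p0}}{E_m E_n E_p} \; - \; \sum_{m,n \neq 0} \frac{\mu_{0m}\mu_{m0}\,\mu_{0n}\mu_{n0}}{E_m^{2} E_n}, \qquad \bar{\mu}_{mn} = \mu_{mn} - \delta_{mn}\mu_{00}, \]

where the transition dipoles \(\mu_{mn}\) and excitation energies \(E_n\) are taken from the CI states described above; the quality of the truncated CI expansion therefore directly controls the quality of the computed hyperpolarizability.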
Abstract:
Graphenes with varying numbers of layers can be synthesized using different strategies. Thus, single-layer graphene is prepared by micromechanical cleavage, reduction of single-layer graphene oxide, chemical vapor deposition and other methods. Few-layer graphenes are synthesized by conversion of nanodiamond, arc discharge of graphite and other methods. In this article, we briefly review the various synthetic methods and the surface, magnetic and electrical properties of the resulting graphenes. Few-layer graphenes exhibit ferromagnetic features along with antiferromagnetic properties, independent of the method of preparation. Aside from data on the electrical conductivity of graphenes and graphene-polymer composites, we also present the field-effect transistor characteristics of graphenes. Only single-layer reduced graphene oxide exhibits ambipolar properties. The interaction of electron donor and acceptor molecules with few-layer graphene samples is examined in detail.
Abstract:
The effect of using a spatially smoothed forward-backward covariance matrix on the performance of weighted eigen-based state space methods/ESPRIT and weighted MUSIC for direction-of-arrival (DOA) estimation is analyzed. Expressions for the mean-squared error in the estimates of the signal zeros and the DOA estimates, along with some general properties of the estimates and optimal weighting matrices, are derived. A key result is that optimally weighted MUSIC and weighted state-space methods/ESPRIT have identical asymptotic performance. Moreover, by properly choosing the number of subarrays, the performance of unweighted state space methods can be significantly improved. It is also shown that the mean-squared error in the DOA estimates is independent of the exact distribution of the source amplitudes. This results in a unified framework for dealing with DOA estimation using a uniformly spaced linear sensor array and with time series frequency estimation problems.
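A minimal sketch of the forward-backward spatially smoothed covariance for a uniform linear array (the subarray length L and data layout are illustrative):

    import numpy as np

    def fb_spatial_smoothing(X, L):
        """X: M x N snapshot matrix (M sensors, N snapshots); L: subarray size.
        Returns the L x L forward-backward spatially smoothed covariance."""
        M, N = X.shape
        R = X @ X.conj().T / N               # sample covariance
        J = np.eye(L)[::-1]                  # exchange (flip) matrix
        K = M - L + 1                        # number of overlapping subarrays
        Rs = np.zeros((L, L), dtype=complex)
        for k in range(K):
            Rk = R[k:k + L, k:k + L]         # forward subarray covariance
            Rs += Rk + J @ Rk.conj() @ J     # add the backward counterpart
        return Rs / (2 * K)

The analysis above concerns precisely how the choice of the number of subarrays K = M - L + 1 affects the asymptotic mean-squared error of the resulting DOA estimates.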
Abstract:
In the direction of arrival (DOA) estimation problem, we encounter both finite data and insufficient knowledge of the array characterization. It is therefore important to study how subspace-based methods perform under such conditions. We analyze the finite data performance of the multiple signal classification (MUSIC) and minimum norm (min. norm) methods in the presence of sensor gain and phase errors, and derive expressions for the mean square error (MSE) in the DOA estimates. These expressions are first derived assuming an arbitrary array and then simplified for the special case of a uniform linear array with isotropic sensors. When they are further specialized to the cases of finite data only and sensor errors only, they reduce to the recent results given in [9-12]. Computer simulations are used to verify the closeness between the predicted and simulated values of the MSE.
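For concreteness, the MUSIC estimator whose finite-data MSE is analyzed forms a pseudospectrum from the noise subspace; a minimal sketch for a half-wavelength-spaced uniform linear array (illustrative, not the paper's code):

    import numpy as np

    def music_spectrum(R, n_sources, angles_deg):
        """R: M x M array covariance; returns 1/||E_n^H a(theta)||^2 on a grid."""
        M = R.shape[0]
        eigval, eigvec = np.linalg.eigh(R)   # eigenvalues in ascending order
        En = eigvec[:, :M - n_sources]       # noise-subspace eigenvectors
        spectrum = []
        for th in np.deg2rad(angles_deg):
            a = np.exp(1j * np.pi * np.arange(M) * np.sin(th))  # steering vector
            spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(spectrum)

Sensor gain and phase errors perturb the steering vectors a(theta), and it is the effect of this perturbation, together with finite-sample errors in R, on the peak locations that the MSE expressions quantify.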
Abstract:
A new throttling system for SI engines is examined. The Sauter mean diameter (SMD) of the fuel droplets in the induction system is measured to evaluate the performance of the new device relative to the conventional throttle plate arrangement. The measurements are conducted at steady flow conditions. A forward angular scattering technique with a He-Ne laser beam is used for droplet size measurement. The experiments are carried out with different mixture strengths, stream velocities and throttle positions. It is observed that the A/F ratio has no effect on SMD. However, stream velocity and throttle position have a significant influence on SMD. The new throttling method is found to be more effective in reducing the SMD than the conventional throttle plate, particularly at low throttle opening and high stream velocity.