212 results for Rayleigh Random Variables
Abstract:
Stochastic modelling is a useful way of simulating complex hard-rock aquifers, as hydrological properties (permeability, porosity, etc.) can be described using random variables with known statistics. However, very few studies have assessed the influence of topological uncertainty (i.e. the variability of the thickness of conductive zones in the aquifer), probably because it is not easy to retrieve accurate statistics of the aquifer geometry, especially in hard-rock contexts. In this paper, we assessed the potential of using geophysical surveys to describe the geometry of a hard-rock aquifer in a stochastic modelling framework. The study site was a small experimental watershed in South India, where the aquifer consisted of a clayey to loamy-sandy zone (regolith) underlain by a conductive fissured rock layer (protolith) and the unweathered gneiss (bedrock) at the bottom. The spatial variability of the thickness of the regolith and fissured layers was estimated from electrical resistivity tomography (ERT) profiles performed along a few cross sections in the watershed. For stochastic analysis using Monte Carlo simulation, the generated random layer thicknesses were made conditional on the available geophysical data. In order to simulate steady-state flow in the irregular domain with variable geometry, we used an isoparametric finite element method to discretize the flow equation over an unstructured grid with irregular hexahedral elements. The results indicated that the spatial variability of the layer thickness had a significant effect in reducing the simulated effective steady seepage flux, and that using the conditional simulations reduced the uncertainty of the simulated seepage flux. In conclusion, combining information on the aquifer geometry obtained from geophysical surveys with stochastic modelling is a promising methodology for improving the simulation of groundwater flow in complex hard-rock aquifers. (C) 2013 Elsevier B.V. All rights reserved.
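As a rough illustration of the conditioning step described above, the following sketch draws layer-thickness realizations from an assumed one-dimensional Gaussian random field and conditions them on a few ERT-derived values by kriging the mismatch; the covariance model, locations and all numbers are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of conditional simulation of layer thickness along a
# transect: unconditional Gaussian-field draws are corrected by kriging the
# mismatch at ERT-constrained points. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1000.0, 101)           # grid along transect (m)
x_obs = np.array([100.0, 450.0, 800.0])     # ERT profile locations (m)
t_obs = np.array([12.0, 18.0, 9.0])         # interpreted thicknesses (m)
mu, sigma, ell = 15.0, 4.0, 200.0           # assumed mean, std, range

def cov(a, b):
    """Exponential covariance between two sets of positions (assumed model)."""
    return sigma**2 * np.exp(-np.abs(a[:, None] - b[None, :]) / ell)

# Simple-kriging weights mapping observation mismatches onto the grid
W = cov(x, x_obs) @ np.linalg.inv(cov(x_obs, x_obs))

def conditional_realization():
    pts = np.concatenate([x, x_obs])
    C = cov(pts, pts) + 1e-8 * np.eye(pts.size)   # tiny nugget for stability
    z = rng.multivariate_normal(np.full(pts.size, mu), C)
    z_grid, z_at_obs = z[:x.size], z[x.size:]
    # conditioning step: the corrected field honours t_obs at x_obs
    return z_grid + W @ (t_obs - z_at_obs)

realizations = np.array([conditional_realization() for _ in range(500)])
```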
Abstract:
Impoverishment of particles, i.e. of the discretely simulated sample paths of the process dynamics, poses a major obstacle to employing particle filters for large-dimensional nonlinear system identification. The known route to alleviating this impoverishment, namely using an ensemble size that grows exponentially with the system dimension, remains computationally infeasible in most cases of practical importance. In this work, we explore the unscented transformation of Gaussian random variables, incorporated within a scaled Gaussian sum stochastic filter, as a means of applying nonlinear stochastic filtering theory to higher-dimensional structural system identification problems. As an additional strategy to reconcile the evolving process dynamics with the observation history, the proposed filtering scheme also modifies the process model via the incorporation of gain-weighted innovation terms. The reported numerical work on the identification of structural dynamic models of dimension up to 100 is indicative of the potential of the proposed filter in realizing the stated aim of successfully treating relatively larger-dimensional filtering problems. (C) 2013 Elsevier Ltd. All rights reserved.
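For reference, here is a minimal sketch of the standard scaled unscented transformation, the building block referred to above; this is the generic form, not the paper's scaled Gaussian sum filter itself.

```python
# Generic scaled unscented transformation (UT): propagate a Gaussian
# through a nonlinearity using 2n+1 deterministically chosen sigma points.
import numpy as np

def unscented_transform(mu, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Approximate the mean and covariance of f(x) for x ~ N(mu, P)."""
    n = mu.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)            # matrix square root
    sigma_pts = np.vstack([mu, mu + S.T, mu - S.T])  # rows are sigma points
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigma_pts])          # transformed points
    y_mean = wm @ Y
    y_cov = (wc[:, None] * (Y - y_mean)).T @ (Y - y_mean)
    return y_mean, y_cov

# e.g. mean/covariance of f(x) = x**2 under N(0, I):
# unscented_transform(np.zeros(2), np.eye(2), lambda x: x**2)
```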
Abstract:
In contemporary wideband orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE) and WiMAX, different subcarriers over which a codeword is transmitted may experience different signal-to-noise ratios (SNRs). Thus, adaptive modulation and coding (AMC) in these systems is driven by a vector of subcarrier SNRs experienced by the codeword, and is more involved. Exponential effective SNR mapping (EESM) simplifies the problem by mapping this vector into a single equivalent flat-fading SNR. Analysis of AMC using EESM is challenging owing to its non-linear nature and its dependence on the modulation and coding scheme. We first propose a novel statistical model for the EESM, which is based on the Beta distribution. It is motivated by the central limit approximation for random variables with finite support. It is simpler than, and as accurate as, the more involved ad hoc models proposed earlier. Using it, we develop novel expressions for the throughput of a point-to-point OFDM link with multi-antenna diversity that uses EESM for AMC. We then analyze a general multi-cell OFDM deployment with co-channel interference for various frequency-domain schedulers. Extensive results based on LTE and WiMAX are presented to verify the model and analysis, and to gain new insights.
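The EESM itself has a standard closed form, SNR_eff = -beta * ln((1/N) * sum_i exp(-SNR_i / beta)), with beta calibrated per modulation and coding scheme; the paper's Beta-distribution model describes the statistics of this output across channel realizations. A small sketch:

```python
# EESM: collapse a vector of per-subcarrier SNRs into one equivalent
# flat-fading SNR. beta is a per-MCS calibration constant.
import numpy as np

def eesm(snrs_db, beta):
    snrs = 10.0 ** (np.asarray(snrs_db) / 10.0)      # to linear scale
    eff = -beta * np.log(np.mean(np.exp(-snrs / beta)))
    return 10.0 * np.log10(eff)                      # back to dB

# e.g. eesm([3.0, 10.0, 6.5, 1.2], beta=4.0) gives a single effective SNR
# dominated by the weaker subcarriers, as intended.
```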
Abstract:
In this paper, an approach for target component and system reliability-based design optimisation (RBDO) to evaluate safety with respect to the internal seismic stability of geosynthetic-reinforced soil (GRS) structures is presented. Three modes of failure are considered: tension failure of the bottom-most layer of reinforcement, pullout failure of the topmost layer of reinforcement, and total pullout failure of all reinforcement layers. The analysis is performed by treating the backfill properties and the geometric and strength properties of the reinforcement as random variables. The optimum number of reinforcement layers and the optimum pullout length needed to maintain stability against tension failure, pullout failure and total pullout failure are proposed for different coefficients of variation of the backfill friction angle and the reinforcement design strength, and for different horizontal seismic acceleration coefficients, by targeting various system reliability indices. The results provide guidelines for the total length of reinforcement required, considering the variability of the backfill as well as the seismic coefficients. One illustrative example is presented to explain the evaluation of reliability for the internal stability of reinforced soil structures using the proposed approach. In a second illustration (the stability of five walls), the Kushiro wall subjected to the Kushiro-Oki earthquake, the Seiken wall subjected to the Chiba-ken Toho-Oki earthquake, the Ta Kung wall subjected to the Ji-Ji earthquake, and the Gould and Valencia walls subjected to the Northridge earthquake are re-examined.
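A minimal sketch of how a single failure mode can be checked by Monte Carlo reliability analysis, in the spirit of the approach above; the limit state, distributions and every number here are illustrative assumptions, not the paper's design equations.

```python
# Toy Monte Carlo reliability check for one (assumed) tension failure mode.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
N = 200_000

phi = np.deg2rad(rng.normal(32.0, 3.2, N))    # backfill friction angle (CoV 10%)
T_d = rng.normal(20.0, 1.0, N)                # reinforcement design strength

def tensile_demand(phi, kh=0.1, gamma=18.0, H=5.0, Sv=0.5):
    """Illustrative seismic tensile demand on the bottom-most layer."""
    Ka = np.tan(np.pi / 4.0 - phi / 2.0) ** 2  # active earth pressure coeff.
    return (1.0 + kh) * Ka * gamma * H * Sv

g = T_d - tensile_demand(phi)                 # limit state: capacity - demand
pf = np.mean(g < 0.0)                         # failure probability, this mode
beta = -NormalDist().inv_cdf(pf)              # corresponding reliability index
print(f"pf = {pf:.4f}, beta = {beta:.2f}")
```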
Abstract:
Consider a J-component series system which is put on an Accelerated Life Test (ALT) involving K stress variables. First, a general formulation of ALT is provided for the log-location-scale family of distributions. A general stress translation function for the location parameter of the component log-lifetime distribution is proposed, which can accommodate standard ones like the Arrhenius, power-rule and log-linear models as special cases. Later, the component lives are assumed to be independent Weibull random variables with a common shape parameter. A full Bayesian methodology is then developed by letting only the scale parameters of the Weibull component lives depend on the stress variables through the general stress translation function. Priors on all the parameters, namely the stress coefficients and the Weibull shape parameter, are assumed to be log-concave and independent of each other; this assumption facilitates Gibbs sampling from the joint posterior. The samples thus generated from the joint posterior are then used to obtain the Bayesian point and interval estimates of the system reliability at the usage condition.
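Under these assumptions the series-system reliability has a simple closed form, R_sys(t) = exp(-sum_j (t/eta_j)^shape), which each posterior draw can be pushed through. A sketch, assuming a log-linear stress translation (one of the special cases named above) and posterior draws from the Gibbs sampler:

```python
# Series system of independent Weibull lifetimes with a common shape;
# each component's log-scale is linear in the K stresses (assumed form).
import numpy as np

def system_reliability(t, shape, stress, coef):
    """R_sys(t) = exp(-sum_j (t / eta_j)^shape); coef is a (J, K+1) array
    of [intercept, stress coefficients] per component."""
    eta = np.exp(coef[:, 0] + coef[:, 1:] @ stress)  # scales at usage stress
    return np.exp(-np.sum((t / eta) ** shape))

def posterior_mean_reliability(t, stress, draws):
    """Bayesian point estimate: average over posterior draws (shape, coef)."""
    return np.mean([system_reliability(t, s, stress, c) for s, c in draws])
```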
Abstract:
Most cities in India have been undergoing rapid development in recent decades, and many rural localities are being transformed into urban hotspots. These developments bring associated land use/land cover (LULC) changes that affect the runoff response of catchments, often evident as increases in runoff peaks, volume and velocity in the drain network. Most existing storm water drains are in a dilapidated state owing to improper maintenance or inadequate design. The drains are conventionally designed using procedures based on some anticipated future conditions. Further, the values of parameters/variables associated with the design of the network are traditionally considered deterministic. In reality, however, these parameters/variables are uncertain due to natural and/or inherent randomness, and this uncertainty needs to be considered when designing a storm water drain network that can effectively convey the discharge. The present study evaluates the performance of an existing storm water drain network in Bangalore, India, through reliability analysis by the Advanced First Order Second Moment (AFOSM) method. In the reliability analysis, the roughness coefficient, slope and conduit dimensions are treated as random variables. Performance of the existing network is evaluated for three failure modes: the first occurs when runoff exceeds the capacity of the storm water drain network; the second when the actual flow velocity in the network exceeds the maximum allowable velocity for erosion control; and the third when the flow velocity falls below the minimum allowable velocity for deposition control. In the analysis, runoff generated from subcatchments of the study area and flow velocities in the storm water drains are estimated using the Storm Water Management Model (SWMM). The computed reliability values are low under all three failure modes, indicating a need to redesign several of the conduits to improve their reliability. This study finds use in devising plans for the expansion of the Bangalore storm water drain system. (C) 2015 The Authors. Published by Elsevier B.V.
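As a generic illustration of the AFOSM step (not the study's SWMM-coupled formulation), the Hasofer-Lind design-point iteration for independent normal variables can be sketched as follows; the limit state and statistics would come from the hydraulic analysis, so everything here is illustrative.

```python
# Hasofer-Lind (AFOSM) reliability index: distance from the origin to the
# design point of g = 0 in standard normal space, via the standard
# design-point iteration with a finite-difference gradient.
import numpy as np

def afosm_beta(g, mu, sigma, tol=1e-6, it_max=50):
    u = np.zeros_like(mu)                      # start at the mean point
    for _ in range(it_max):
        x = mu + sigma * u                     # back to physical space
        gx = g(x)
        h = 1e-6                               # finite-difference step
        grad = np.array([(g(mu + sigma * (u + h * e)) - gx) / h
                         for e in np.eye(u.size)])
        u_prev, u = u, (grad @ u - gx) * grad / (grad @ grad)
        if np.linalg.norm(u - u_prev) < tol:
            break
    return np.linalg.norm(u)                   # beta = ||u*||

# e.g. a capacity-minus-demand conduit limit state with assumed statistics:
# afosm_beta(lambda x: x[0] - x[1], np.array([10.0, 6.0]), np.array([1.0, 1.0]))
```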
Abstract:
We consider a server serving a time-slotted queued system of multiple packet-based flows, where at most one flow can be serviced in a single time slot. The flows have exogenous packet arrivals and time-varying service rates. At each time, the server can observe instantaneous service rates for only a subset of flows (selected from a fixed collection of observable subsets) before scheduling a flow in the subset for service. We are interested in queue-length-aware scheduling to keep the queues short. The limited availability of instantaneous service rate information requires the scheduler to make a careful choice of which subset of service rates to sample. We develop scheduling algorithms that use only partial service rate information from subsets of channels, and that minimize the likelihood of queue overflow in the system. Specifically, we present a new joint subset-sampling and scheduling algorithm called Max-Exp that uses only the current queue lengths to pick a subset of flows, and subsequently schedules a flow using the Exponential rule. When the collection of observable subsets is disjoint, we show that Max-Exp achieves the best exponential decay rate of the tail of the longest queue among all scheduling algorithms that base their decisions on the current (or any finite past history of) system state. To accomplish this, we employ novel analytical techniques for studying the performance of scheduling algorithms using partial state, which may be of independent interest. These include new sample-path large-deviations results for processes obtained by non-random, predictable sampling of sequences of independent and identically distributed random variables. A consequence of these results is that scheduling with partial state information yields a rate function significantly different from scheduling with full channel information. In the special case when the observable subsets are singleton flows, i.e., when there is effectively no a priori channel state information, Max-Exp reduces to simply serving the flow with the longest queue; thus, our results show that always serving the longest queue in the absence of any channel state information is large-deviations optimal.
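For concreteness, one common parameterization of the Exponential rule that Max-Exp applies within the sampled subset looks like the following; the weights and exponent here are assumptions, not the paper's exact constants.

```python
# Exponential scheduling rule: serve the flow maximizing
# rate_i * exp(a_i * q_i / (1 + qbar^eta)), qbar a weighted mean queue.
import numpy as np

def exp_rule(queues, rates, a=None, eta=0.5):
    q = np.asarray(queues, dtype=float)
    r = np.asarray(rates, dtype=float)
    a = np.ones_like(q) if a is None else np.asarray(a, dtype=float)
    denom = 1.0 + np.mean(a * q) ** eta
    return int(np.argmax(r * np.exp(a * q / denom)))

# e.g. exp_rule(queues=[5, 12, 3], rates=[2.0, 1.0, 3.0]) balances favouring
# long queues against favouring instantaneously fast channels.
```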
Abstract:
The study introduces two new alternatives for global response sensitivity analysis based on the L2-norm and Hellinger's metric for measuring the distance between two probabilistic models. Both procedures are shown to be capable of treating dependent non-Gaussian random variable models for the input variables. The sensitivity indices obtained from the L2-norm involve second-order moments of the response and, when applied to the case of an independent and identically distributed sequence of input random variables, are shown to be related to the classical Sobol response sensitivity indices. The analysis based on Hellinger's metric addresses variability across the entire range, or segments, of the response probability density function. This measure is shown to be a conceptually more satisfying alternative to the Kullback-Leibler divergence based analysis reported in the existing literature. Other issues addressed in the study cover Monte Carlo simulation based methods for computing the sensitivity indices and sensitivity analysis with respect to grouped variables. Illustrative examples consist of studies on the global sensitivity analysis of the natural frequencies of a random multi-degree-of-freedom system, the response of a nonlinear frame, and the safety margin associated with a nonlinear performance function. (C) 2015 Elsevier Ltd. All rights reserved.
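Since the L2-norm indices connect to Sobol indices for i.i.d. inputs, a minimal Monte Carlo sketch of first-order Sobol index estimation may help fix ideas; the model function and input distributions below are illustrative, not from the paper.

```python
# Saltelli-style pick-freeze estimator of first-order Sobol indices.
import numpy as np

rng = np.random.default_rng(2)
N, d = 100_000, 3

def model(X):
    # illustrative performance function (assumed)
    return X[:, 0] + 2.0 * X[:, 1] ** 2 + X[:, 0] * X[:, 2]

A = rng.standard_normal((N, d))      # two independent input sample blocks
B = rng.standard_normal((N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # resample only the i-th input
    S_i = np.mean(fB * (model(ABi) - fA)) / var
    print(f"S_{i} = {S_i:.3f}")
```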
Abstract:
Response analysis of a linear structure with uncertainties in both the structural parameters and the external excitation is considered here. When such an analysis is carried out using the spectral stochastic finite element method (SSFEM), the computational cost often tends to be prohibitive due to the rapid growth of the number of spectral bases with the number of random variables and the order of expansion. For instance, if the excitation contains a random frequency, or if it is a general random process, then a good approximation of these excitations using polynomial chaos expansion (PCE) involves a large number of terms, which leads to very high cost. To address this issue, a hybrid method is proposed in this work. In this method, the random eigenvalue problem is first solved using the weak formulation of SSFEM, which involves solving a system of deterministic nonlinear algebraic equations to estimate the PCE coefficients of the random eigenvalues and eigenvectors. The response is then estimated using a Monte Carlo (MC) simulation, where the modal bases are sampled from the PCE of the random eigenvectors estimated in the previous step, followed by numerical time integration. Numerical studies show that the proposed method successfully reduces the computational burden compared with either a pure SSFEM or a pure MC simulation, and is more accurate than a perturbation method. The computational gain improves as the problem size, in terms of degrees of freedom, grows. It also improves as the time span of interest shrinks.
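A toy sketch of the second (sampling) stage, assuming the weak-SSFEM stage produced PCE coefficients for one natural frequency in a single Gaussian germ; the coefficients and the stand-in response function are hypothetical.

```python
# Sample the Gaussian germ, reconstruct the eigen-frequency from its
# (probabilists' Hermite) PCE, then evaluate a stand-in modal response.
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(3)
c_omega = np.array([12.0, 1.5, 0.2])      # assumed PCE coefficients of omega(xi)

def modal_peak_response(omega, zeta=0.02, p0=1.0):
    """Resonant steady-state amplitude of a unit-mass mode; stands in for
    the numerical time integration of the sampled modal model."""
    return p0 / (2.0 * zeta * omega**2)

xi = rng.standard_normal(10_000)          # sample the germ
omega = hermeval(xi, c_omega)             # omega(xi) = sum_k c_k He_k(xi)
resp = modal_peak_response(omega)
print(f"mean = {resp.mean():.4f}, std = {resp.std():.4f}")
```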
Abstract:
The stochastic version of Pontryagin's maximum principle is applied to determine an optimal maintenance policy for equipment subject to random deterioration. The deterioration of the equipment with age is modelled as a random process. The model is then generalized to include random catastrophic failure of the equipment. The optimal maintenance policy is derived for two special probability distributions of the time to failure of the equipment, namely the exponential and Weibull distributions. Both the salvage value and the deterioration rate of the equipment are treated as state variables and the maintenance effort as a control variable. The result is illustrated by an example.
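For orientation, here is the deterministic skeleton of the maximum principle invoked above, in generic notation rather than the paper's exact stochastic model:

```latex
% With state x(t) (salvage value, deterioration rate), control u(t)
% (maintenance effort), dynamics \dot{x} = g(x, u, t) and running
% payoff f(x, u, t), define the Hamiltonian and optimality conditions:
\[
  H(x, u, \lambda, t) = f(x, u, t) + \lambda^{\top} g(x, u, t),
  \qquad
  \dot{\lambda} = -\frac{\partial H}{\partial x},
  \qquad
  u^{*}(t) = \arg\max_{u} H\bigl(x^{*}(t), u, \lambda(t), t\bigr).
\]
```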
Abstract:
Downscaling to station-scale hydrologic variables from large-scale atmospheric variables simulated by general circulation models (GCMs) is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily atmospheric (large-scale) variable sequence, is modeled as a linear chain CRF. CRFs do not make assumptions on independence of observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation is used to determine the most likely precipitation sequence for a given set of atmospheric input variables using the Viterbi algorithm. Direct classification of dry/wet days as well as precipitation amount is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied for downscaling monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days, and also an increase in wet day precipitation amounts. A comparison of current and future predicted probability density functions for daily precipitation shows a change in shape of the density function with decreasing probability of lower precipitation and increasing probability of higher precipitation.
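The linear-chain CRF at the core of the method has the standard form (generic notation):

```latex
% Linear-chain CRF for the precipitation sequence y_{1:T} given the
% atmospheric feature sequence x:
\[
  p(y \mid x) = \frac{1}{Z(x)}
  \exp\!\Bigl( \sum_{t=1}^{T} \sum_{k} \lambda_{k}\, f_{k}(y_{t-1}, y_{t}, x, t) \Bigr),
  \qquad
  Z(x) = \sum_{y'} \exp\!\Bigl( \sum_{t,k} \lambda_{k}\, f_{k}(y'_{t-1}, y'_{t}, x, t) \Bigr),
\]
% with the weights \lambda_k fit by maximizing the conditional log-likelihood
% (L-BFGS) and \arg\max_y p(y \mid x) recovered by the Viterbi algorithm.
```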
Abstract:
A technique is developed to study the random vibration of nonlinear systems. The method is based on the assumption that the joint probability density function of the response and input variables is Gaussian. It is shown that this method is more general than the statistical linearization technique in that it can handle non-Gaussian excitations and amplitude-limited responses. As an example, a bilinear hysteretic system under white noise excitation is analyzed. The predictions of various response statistics by this technique are in good agreement with other available results.
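One standard device behind such Gaussian-closure schemes, stated here as background rather than as the paper's exact derivation, is Stein's lemma, which reduces expectations of nonlinear terms to computable Gaussian moments:

```latex
% For a response assumed Gaussian, X \sim N(\mu, \sigma^2), Stein's lemma
\[
  \mathbb{E}\bigl[(X - \mu)\, g(X)\bigr] = \sigma^{2}\, \mathbb{E}\bigl[g'(X)\bigr]
\]
% turns expectations of nonlinear (e.g. restoring-force) terms into
% Gaussian moments, which is what closes the response-moment equations.
```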
Abstract:
A reliable method for estimating the service life of a structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method that combines the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine bounds on the characteristic value of the failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, the bounds on the characteristic value of the failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of the failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
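A toy sketch of the hybrid scheme: Monte Carlo handles the random variables on the inside, while the vertex method enumerates alpha-cut interval endpoints of the fuzzy variables on the outside. The limit state and every number below are assumed placeholders for the chloride-ingress model.

```python
# Vertex method over fuzzy alpha-cuts, Monte Carlo over random variables,
# yielding interval bounds on the failure probability at each alpha level.
import numpy as np

rng = np.random.default_rng(4)

def pf_monte_carlo(d_factor, c_crit, n=50_000):
    """P(chloride content at the rebar exceeds the threshold), with two
    assumed lognormal random variables."""
    c_surf = rng.lognormal(np.log(3.0), 0.3, n)   # surface chloride
    resist = rng.lognormal(np.log(1.0), 0.2, n)   # cover resistance factor
    return np.mean(c_surf * d_factor * resist > c_crit)

# Alpha-cut intervals for two fuzzy variables (vertex method):
d_cut = (0.8, 1.2)       # fuzzy diffusion factor at this alpha level
c_cut = (2.5, 3.5)       # fuzzy critical threshold at this alpha level

pfs = [pf_monte_carlo(d, c) for d in d_cut for c in c_cut]
print(f"pf bounds at this alpha-cut: [{min(pfs):.4f}, {max(pfs):.4f}]")
```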
Abstract:
In order to understand the role of translational modes in orientational relaxation in dense dipolar liquids, we have carried out a computer "experiment" in which a random dipolar lattice was generated by quenching only the translational motion of the molecules of an equilibrated dipolar liquid. The lattice so generated was orientationally disordered and positionally random. A detailed study of orientational relaxation in this random dipolar lattice revealed interesting differences from the corresponding dipolar liquid. In particular, we found that the relaxation of the collective orientational correlation functions at intermediate wave numbers was markedly slower at long times for the random lattice than for the liquid. This verified the important role of the translational modes in this regime, as predicted recently by molecular theories. The single-particle orientational correlation functions of the random lattice also decayed significantly more slowly at long times compared to those of the dipolar liquid.
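For reference, the single-particle orientational correlation functions discussed above have the standard definition below; the collective analogue correlates wave-number-resolved fluctuations of the orientational density, which is where the translational modes enter.

```latex
% \hat{u}_i is the dipole unit vector and P_\ell the rank-\ell
% Legendre polynomial:
\[
  C_{\ell}(t) = \bigl\langle P_{\ell}\bigl(\hat{u}_{i}(0) \cdot \hat{u}_{i}(t)\bigr) \bigr\rangle .
\]
```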