94 results for Reliability allocation
Abstract:
The problem of updating the reliability of instrumented structures based on measured response under random dynamic loading is considered. A solution strategy is developed within the framework of Monte Carlo simulation based dynamic state estimation and Girsanov's transformation for variance reduction. For linear Gaussian state space models, the solution is based on the continuous version of the Kalman filter, while for non-linear and/or non-Gaussian state space models, bootstrap particle filters are adopted. The controls needed to implement the Girsanov transformation are obtained by solving a constrained non-linear optimization problem. Numerical illustrations include studies on a multi-degree-of-freedom linear system and on non-linear systems with geometric and/or hereditary non-linearities and non-stationary random excitations.
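A minimal sketch of the state-estimation building block referred to above, assuming a discrete-time linear Gaussian model (the paper works with the continuous-time Kalman filter; the matrices, noise levels and measurements below are illustrative placeholders, not taken from the paper):

import numpy as np

def kalman_filter(F, H, Q, R, x0, P0, ys):
    # F: state transition, H: observation matrix, Q/R: process/measurement noise covariances
    x, P = x0, P0
    estimates = []
    for y in ys:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with measurement y
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (y - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return estimates

# illustrative two-state (displacement/velocity-like) example; values assumed
F = np.array([[1.0, 0.01], [-0.4, 0.98]])
H = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
ys = [np.array([0.05 * k]) for k in range(10)]   # placeholder measured responses
print(kalman_filter(F, H, Q, R, np.zeros(2), np.eye(2), ys)[-1])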
Abstract:
Femtocells are a new concept for improving the coverage and capacity of a cellular system. We consider the problem of channel allocation and power control for different users within a Femtocell. Knowing the available channels, the channel states and the rate requirements of the different users, the Femtocell base station (FBS) allocates the channels so as to satisfy those requirements. The Femtocell should also use minimal power so as to cause the least interference to its neighboring Femtocells and outside users. We develop efficient, low-complexity algorithms that can be used online by the Femtocell. The users may want to transmit data or voice. We compare our algorithms with the optimal solutions.
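For illustration only, a toy greedy allocation in the spirit of the problem described above; the Shannon-type power-rate relation, the gains and the ordering rule are assumptions, and this is not the authors' algorithm:

def greedy_allocation(gains, rate_req, noise=1.0):
    # gains[u][c]: gain of user u on channel c (assumed known at the FBS)
    # power needed on channel c to give user u rate r: (2**r - 1) * noise / gain (assumed model)
    free = set(range(len(gains[0])))
    alloc, total_power = {}, 0.0
    for u in sorted(range(len(gains)), key=lambda i: -rate_req[i]):   # most demanding users first
        best = min(free, key=lambda c: (2 ** rate_req[u] - 1) * noise / gains[u][c])
        alloc[u] = best
        total_power += (2 ** rate_req[u] - 1) * noise / gains[u][best]
        free.remove(best)
    return alloc, total_power

gains = [[0.9, 0.3, 0.5], [0.2, 0.8, 0.4]]        # illustrative channel gains
print(greedy_allocation(gains, rate_req=[2.0, 1.0]))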
Abstract:
In this paper, an approach to target component and system reliability-based design optimisation (RBDO) for evaluating the internal seismic stability of geosynthetic-reinforced soil (GRS) structures is presented. Three modes of failure are considered: tension failure of the bottom-most layer of reinforcement, pullout failure of the topmost layer of reinforcement, and total pullout failure of all reinforcement layers. The analysis treats the backfill properties and the geometric and strength properties of the reinforcement as random variables. The optimum number of reinforcement layers and the optimum pullout length needed to maintain stability against tension failure, pullout failure and total pullout failure are proposed for different coefficients of variation of the backfill friction angle and of the reinforcement design strength, for different horizontal seismic acceleration coefficients, and for various target system reliability indices. The results provide guidelines for the total length of reinforcement required, considering the variability of the backfill as well as the seismic coefficients. One illustrative example is presented to explain the evaluation of reliability for the internal stability of reinforced soil structures using the proposed approach. In a second illustration (the stability of five walls), the Kushiro wall subjected to the Kushiro-Oki earthquake, the Seiken wall subjected to the Chiba-ken Toho-Oki earthquake, the Ta Kung wall subjected to the Ji-Ji earthquake, and the Gould and Valencia walls subjected to the Northridge earthquake are re-examined.
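As a rough illustration of how a system reliability index over several failure modes can be estimated, a crude Monte Carlo sketch for a series system of three made-up limit states; the distributions, parameters and limit-state functions are invented for illustration and are not the paper's models:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000
phi = rng.normal(34.0, 3.4, n)     # backfill friction angle (deg), assumed
T   = rng.normal(40.0, 4.0, n)     # reinforcement design strength (kN/m), assumed
kh  = rng.uniform(0.0, 0.2, n)     # horizontal seismic coefficient, assumed

# toy limit states, g > 0 means safe: tension, pullout of the top layer, total pullout
g1 = T - (25.0 + 60.0 * kh)
g2 = 0.9 * np.tan(np.radians(phi)) - (0.35 + 0.8 * kh)
g3 = 1.2 * np.tan(np.radians(phi)) - (0.45 + 1.0 * kh)

pf = np.mean((g1 < 0) | (g2 < 0) | (g3 < 0))   # series system: failure in any one mode
print("system pf =", pf, " beta =", -norm.ppf(pf))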
Abstract:
The problem of time-variant reliability analysis of randomly parametered and randomly driven nonlinear vibrating systems is considered. The study combines two Monte Carlo variance reduction strategies into a single framework to tackle the problem. The first of these strategies is based on the application of the Girsanov transformation to account for the randomness in the dynamic excitations, and the second is fashioned after the subset simulation method to deal with randomness in the system parameters. Illustrative examples include studies of single- and multi-degree-of-freedom linear/non-linear inelastic randomly parametered building frame models driven by stationary/non-stationary, white/filtered white noise support acceleration. The estimated reliability measures are demonstrated to compare well with results from direct Monte Carlo simulations. (C) 2014 Elsevier Ltd. All rights reserved.
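A minimal sketch of the Girsanov-transformation idea for the excitation randomness, on a single-degree-of-freedom linear oscillator with assumed parameters: trajectories are simulated with an added drift (control) that pushes the response toward the barrier, and the estimate is corrected by the Radon-Nikodym likelihood ratio. The resonant cosine control used here is a heuristic stand-in; the paper obtains such controls more carefully:

import numpy as np

rng = np.random.default_rng(1)
# SDOF oscillator x'' + 2*zeta*wn*x' + wn**2 * x = sigma * (white noise); all parameters assumed
wn, zeta, sigma = 2.0 * np.pi, 0.05, 1.0
T, dt, barrier, nsim = 4.0, 0.005, 0.55, 2000
t = np.arange(0.0, T, dt)
u = 2.2 * np.cos(wn * t)      # Girsanov control: a heuristic resonant push toward the barrier

est = 0.0
for _ in range(nsim):
    x, v, logL, crossed = 0.0, 0.0, 0.0, False
    for k in range(t.size):
        dW = np.sqrt(dt) * rng.standard_normal()
        v += (-2.0 * zeta * wn * v - wn ** 2 * x) * dt + sigma * (u[k] * dt + dW)
        x += v * dt
        logL += -u[k] * dW - 0.5 * u[k] ** 2 * dt   # Radon-Nikodym (likelihood ratio) correction
        if x > barrier:
            crossed = True
    if crossed:
        est += np.exp(logL)
print("first-passage probability estimate:", est / nsim)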
Abstract:
Multi-GPU machines are being increasingly used in high-performance computing. Each GPU in such a machine has its own memory and does not share the address space either with the host CPU or with other GPUs. Hence, applications utilizing multiple GPUs have to manually allocate and manage data on each GPU. Existing works that propose to automate data allocation for GPUs have limitations and inefficiencies in terms of allocation sizes, exploiting reuse, transfer costs, and scalability. We propose a scalable and fully automatic data allocation and buffer management scheme for affine loop nests on multi-GPU machines, which we call the Bounding-Box-based Memory Manager (BBMM). At runtime, BBMM can perform standard set operations like union, intersection, and difference, as well as subset and superset tests, on hyperrectangular regions of array data (bounding boxes). It uses these operations, along with some compiler assistance, to identify, allocate, and manage the data required by applications in terms of disjoint bounding boxes. This allows it to (1) allocate exactly or nearly as much data as is required by the computations running on each GPU, (2) efficiently track buffer allocations and hence maximize data reuse across tiles and minimize data transfer overhead, and (3) as a result, maximize utilization of the combined memory on multi-GPU machines. BBMM can work with any choice of parallelizing transformations, computation placement, and scheduling schemes, whether static or dynamic. Experiments run on a four-GPU machine with various scientific programs showed that BBMM reduces data allocations on each GPU by up to 75% compared to current allocation schemes, yields performance of at least 88% of manually written code, and allows excellent weak scaling.
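A small sketch of the kind of hyperrectangle (bounding-box) operations such a scheme relies on; the class and method names are illustrative, not BBMM's actual interface, the union is over-approximated by its bounding-box hull, and an exact difference of boxes (generally not a single box) is omitted:

class Box:
    # hyperrectangle with inclusive per-dimension (lo, hi) integer bounds
    def __init__(self, bounds):
        self.bounds = list(bounds)

    def intersect(self, other):
        b = [(max(lo1, lo2), min(hi1, hi2))
             for (lo1, hi1), (lo2, hi2) in zip(self.bounds, other.bounds)]
        return Box(b) if all(lo <= hi for lo, hi in b) else None

    def hull(self, other):
        # bounding box of the union (an over-approximation when the boxes do not overlap)
        return Box([(min(lo1, lo2), max(hi1, hi2))
                    for (lo1, hi1), (lo2, hi2) in zip(self.bounds, other.bounds)])

    def contains(self, other):
        # subset/superset test: does self cover other?
        return all(lo1 <= lo2 and hi2 <= hi1
                   for (lo1, hi1), (lo2, hi2) in zip(self.bounds, other.bounds))

    def size(self):
        n = 1
        for lo, hi in self.bounds:
            n *= hi - lo + 1
        return n

# two tiles of a 2-D array accessed by different GPUs (ranges are illustrative)
a = Box([(0, 511), (0, 1023)])
b = Box([(256, 767), (0, 1023)])
print(a.intersect(b).size(), a.hull(b).size(), a.contains(b))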
Abstract:
A load and resistance factor design (LRFD) approach for reinforced soil walls is presented to produce designs with consistent and uniform levels of risk across the whole range of design applications. The evaluation of load and resistance factors for reinforced soil walls based on reliability theory is presented. A first-order reliability method (FORM) is used to determine appropriate ranges for the values of the load and resistance factors. Using the pseudo-static limit equilibrium method, an analysis is conducted to evaluate the external stability of reinforced soil walls subjected to earthquake loading. The potential failure mechanisms considered in the analysis are sliding failure, eccentricity failure of the resultant force (or overturning failure) and bearing capacity failure. The proposed procedure includes the variability associated with the reinforced backfill, retained backfill, foundation soil, horizontal seismic acceleration and surcharge load acting on the wall. Partial factors needed to maintain stability against the three modes of failure by targeting a component reliability index of 3.0 are obtained for various values of the coefficients of variation (COV) of the friction angle of the backfill and foundation soil, the distributed dead load surcharge, the cohesion of the foundation soil and the horizontal seismic acceleration. A comparative study between LRFD and allowable stress design (ASD) is also presented with a design example. (C) 2014 Elsevier Ltd. All rights reserved.
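A compact sketch of the FORM (Hasofer-Lind/Rackwitz-Fiessler) iteration used to compute a component reliability index; the sliding-type limit state, the statistics, and the assumption of independent normal variables below are illustrative, not the paper's calibration:

import numpy as np

def form_beta(g, mu, sd, iters=50, eps=1e-6):
    # Hasofer-Lind/Rackwitz-Fiessler iteration, independent normal variables assumed
    u = np.zeros(len(mu))
    for _ in range(iters):
        x = mu + sd * u
        grad = np.array([(g(mu + sd * (u + eps * e)) - g(x)) / eps   # gradient in standard normal space
                         for e in np.eye(len(mu))])
        u = (grad @ u - g(x)) / (grad @ grad) * grad
    return np.linalg.norm(u)

# illustrative sliding limit state: frictional resistance vs. seismic-amplified thrust (all values assumed)
def g_slide(x):
    phi, kh = x                                  # foundation friction angle (deg), seismic coefficient
    return np.tan(np.radians(phi)) * 300.0 - (90.0 + 400.0 * kh)

mu = np.array([32.0, 0.1])
sd = np.array([3.2, 0.03])                       # treating kh as normal is itself a simplification
print("component reliability index:", round(form_beta(g_slide, mu, sd), 2))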
Abstract:
We address the problem of passive eavesdroppers in multi-hop wireless networks using the technique of friendly jamming. The network is assumed to employ Decode and Forward (DF) relaying. Assuming the availability of perfect channel state information (CSI) of legitimate nodes and eavesdroppers, we consider a scheduling and power allocation (PA) problem for a multiple-source multiple-sink scenario so that eavesdroppers are jammed, and source-destination throughput targets are met while minimizing the overall transmitted power. We propose activation sets (AS-es) for scheduling, and formulate an optimization problem for PA. Several methods for finding AS-es are discussed and compared. We present an approximate linear program for the original nonlinear, non-convex PA optimization problem, and argue that under certain conditions, both the formulations produce identical results. In the absence of eavesdroppers' CSI, we utilize the notion of Vulnerability Region (VR), and formulate an optimization problem with the objective of minimizing the VR. Our results show that the proposed solution can achieve power-efficient operation while defeating eavesdroppers and achieving desired source-destination throughputs simultaneously. (C) 2015 Elsevier B.V. All rights reserved.
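A toy version of a linearized power-allocation step of this flavor: minimize total transmitted power subject to linearized per-link throughput constraints and a minimum jamming power; the coefficients, the network size and the linearization itself are assumptions, not the paper's formulation:

import numpy as np
from scipy.optimize import linprog

# decision variables p = [p_link1, p_link2, p_jammer]; objective: total transmitted power
c = np.ones(3)
# linearized constraints (coefficients assumed):
#   2.0*p_link1 - 0.6*p_jammer >= 1.5   (link 1 must overcome jamming leakage)
#   1.2*p_link2                >= 0.9   (link 2 throughput target)
#   p_jammer                   >= 0.5   (keep the eavesdropper's channel degraded)
A_ub = np.array([[-2.0, 0.0, 0.6],
                 [0.0, -1.2, 0.0],
                 [0.0, 0.0, -1.0]])
b_ub = np.array([-1.5, -0.9, -0.5])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 2.0)] * 3)
print("power allocation:", res.x, " total power:", res.fun)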
Abstract:
Most cities in India have been undergoing rapid development in recent decades, and many rural localities are being transformed into urban hotspots. These developments are accompanied by land use/land cover (LULC) changes that affect the runoff response of catchments, often evident in the form of increased runoff peaks, volumes and velocities in the drain network. Many of the existing storm water drains are in a dilapidated state owing to improper maintenance or inadequate design. The drains are conventionally designed using procedures based on anticipated future conditions. Further, the values of the parameters/variables associated with the design of the network are traditionally treated as deterministic. In reality, however, these parameters/variables are uncertain due to natural and/or inherent randomness. There is a need to account for these uncertainties when designing a storm water drain network that can effectively convey the discharge. The present study evaluates the performance of an existing storm water drain network in Bangalore, India, through reliability analysis by the Advanced First Order Second Moment (AFOSM) method. In the reliability analysis, the roughness coefficient, slope and conduit dimensions are treated as random variables. Performance of the existing network is evaluated considering three failure modes: the first occurs when runoff exceeds the capacity of the storm water drain network, the second when the actual flow velocity in the network exceeds the maximum allowable velocity for erosion control, and the third when the flow velocity falls below the minimum allowable velocity for deposition control. In the analysis, the runoff generated from the subcatchments of the study area and the flow velocities in the storm water drains are estimated using the Storm Water Management Model (SWMM). Results from the study are presented and discussed. The reliability values are low under all three failure modes, indicating a need to redesign several of the conduits to improve their reliability. This study finds use in devising plans for the expansion of the Bangalore storm water drain system. (C) 2015 The Authors. Published by Elsevier B.V.
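As an illustration of a single failure-mode check, a mean-value first-order second-moment sketch for the erosion (maximum-velocity) mode, using Manning's equation for a rectangular conduit; the study itself couples AFOSM with SWMM-computed flows, and every number below is assumed:

import numpy as np

def velocity(n, S, b, y):
    # Manning's equation for a rectangular conduit of width b flowing at depth y
    A, P = b * y, b + 2.0 * y
    return (1.0 / n) * (A / P) ** (2.0 / 3.0) * np.sqrt(S)

mu = np.array([0.015, 0.004, 1.2, 0.8])      # means of roughness n, slope, width (m), depth (m); assumed
sd = np.array([0.0025, 0.0008, 0.05, 0.1])   # standard deviations; assumed
v_allow = 3.0                                # allowable velocity for erosion control (m/s); assumed

def g(x):
    return v_allow - velocity(*x)            # g < 0: velocity exceeds the allowable value

eps = 1e-6
grad = np.array([(g(mu + eps * np.eye(4)[i]) - g(mu)) / eps for i in range(4)])
beta = g(mu) / np.sqrt(np.sum((grad * sd) ** 2))
print("FOSM reliability index for the erosion-control mode:", round(beta, 2))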
Abstract:
The problem of estimating the time-variant reliability of actively controlled structural dynamical systems under stochastic excitations is considered. Monte Carlo simulations, reinforced with Girsanov transformation-based sampling variance reduction, are used to tackle the problem. In this approach, the external excitations are biased by an additional artificial control force. The two control forces have conflicting objectives: one is designed to reduce structural responses, while the other promotes limit-state violations (so as to reduce the sampling variance). The control for variance reduction is fashioned after design-point oscillations based on a first-order reliability method. It is shown that, for structures that are amenable to laboratory testing, the reliability can be estimated experimentally with reduced testing times by devising a procedure based on the ideas of the Girsanov transformation. Illustrative examples include studies on a building frame with a magnetorheological damper-based isolation system subject to nonstationary random earthquake excitations. (C) 2014 American Society of Civil Engineers.
Abstract:
Monte Carlo simulation methods involving splitting of Markov chains have been used to evaluate multi-fold integrals in different application areas. We examine in this paper the performance of these methods in the context of evaluating reliability integrals, from the point of view of characterizing the sampling fluctuations. The methods discussed include the Au-Beck subset simulation, the Holmes-Diaconis-Ross method, and the generalized splitting algorithm. A few improvisations based on the first-order reliability method are suggested to select the algorithmic parameters of the latter two methods. The bias and sampling variance of the alternative estimators are discussed, and an approximation to the sampling distribution of some of these estimators is obtained. Illustrative examples involving component and series system reliability analyses are presented with a view to bringing out the relative merits of the alternative methods. (C) 2015 Elsevier Ltd. All rights reserved.
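A bare-bones subset-simulation sketch in the spirit of the methods compared above, for a component limit state in standard normal space; the plain Metropolis move (rather than the component-wise modified Metropolis), the proposal scale and the level probability p0 are simplifications chosen for brevity:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
dim, N, p0 = 2, 1000, 0.1

def g(u):
    # illustrative limit state in standard normal space; exact failure probability is Phi(-3.5)
    return 3.5 - (u[..., 0] + u[..., 1]) / np.sqrt(2.0)

u = rng.standard_normal((N, dim))
prob = 1.0
for level in range(10):
    gu = g(u)
    if np.mean(gu < 0) >= p0:                      # enough samples already fail: finish
        prob *= np.mean(gu < 0)
        break
    idx = np.argsort(gu)[: int(p0 * N)]            # the p0*N samples closest to failure
    thresh = gu[idx[-1]]                           # intermediate threshold for this level
    prob *= p0
    new = []
    for s in u[idx]:                               # grow each seed into a short chain in {g <= thresh}
        x = s.copy()
        new.append(x.copy())
        for _ in range(int(1 / p0) - 1):
            cand = x + 0.8 * rng.standard_normal(dim)
            if rng.random() < np.exp(0.5 * (x @ x - cand @ cand)) and g(cand) <= thresh:
                x = cand
            new.append(x.copy())
    u = np.array(new)
print("subset simulation estimate:", prob, " exact:", norm.cdf(-3.5))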
Abstract:
The problem of determining the system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time-variant system reliability estimation. The strategy we adopt is based on applying Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov transformation-based Monte Carlo simulations can be extended to laboratory testing to assess the system reliability of engineering structures with a reduced number of samples and hence with reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations of the road load response of an automotive system tested on a four-post test rig. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
We consider optimal average power allocation policies in a wireless channel in the presence of individual delay constraints on the transmitted packets. Power is consumed in the transmission of data only, and we consider the case when the power used in transmission is a linear function of the data transmitted. The transmission channel may experience multipath fading. We develop a computationally efficient online algorithm for the case when all packets have the same hard delay constraint, and later generalize it to the case of multiple real-time streams with different hard deadline constraints. Our algorithm uses linear programming and has very low complexity.
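The authors' algorithm is LP-based; purely to fix ideas, here is a toy earliest-deadline-first greedy that serves each packet on its cheapest feasible slots under fading, with an assumed linear power-in-bits model (per-bit cost 1/gain) and a per-slot capacity. It is a heuristic, not the paper's method:

def schedule(packets, gains, slot_capacity):
    # packets: list of (bits, arrival_slot, deadline_slot); gains[t]: channel gain in slot t
    # power assumed linear in the bits sent, with per-bit cost 1/gain (illustrative model)
    free = {t: slot_capacity for t in range(len(gains))}
    total_power = 0.0
    for bits, arr, dl in sorted(packets, key=lambda p: p[2]):               # earliest deadline first
        remaining = bits
        for t in sorted(range(arr, dl + 1), key=lambda s: 1.0 / gains[s]):  # cheapest feasible slots
            send = min(remaining, free[t])
            free[t] -= send
            total_power += send / gains[t]
            remaining -= send
            if remaining == 0:
                break
        if remaining > 0:
            return None                                                     # infeasible for this heuristic
    return total_power

gains = [0.9, 0.3, 0.8, 0.5, 1.0]             # illustrative fading states per slot
packets = [(4, 0, 2), (3, 1, 4), (2, 2, 4)]   # (bits, arrival slot, deadline slot)
print("total power:", schedule(packets, gains, slot_capacity=5))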
Abstract:
Granular filters are provided for the safety of water-retaining structures to protect against piping failure. Piping is triggered when the base soil to be protected starts migrating in the direction of seepage flow under the influence of the seepage force. To protect the base soil from migration, the voids in the filter media should be small enough, yet not so small as to block the smooth passage of seeping water. Fulfilling these two contradictory design requirements at the same time is a major concern for the successful performance of granular filter media. Since Terzaghi's era, the particle size distribution (PSD) of granular filters has conventionally been designed based on the particle size distribution characteristics of the base soil to be protected. This design approach provides a range of D15f values within which the PSD of the granular filter media should fall, leaving infinitely many possibilities; moreover, safety against the two critical design requirements cannot be ensured. Although used successfully for many decades, the existing filter design guidelines are purely empirical in nature, relying on experience and good engineering judgment. In the present study, analytical solutions proposed by the authors for obtaining the factor of safety with respect to base soil particle migration and permeability considerations are first discussed. The solution takes into consideration the basic geotechnical properties of the base soil and filter media as well as the prevailing hydraulic conditions, and provides a comprehensive approach to granular filter design with the ability to assess stability in terms of a factor of safety. Considering that geotechnical properties are variable in nature, a probabilistic analysis is further suggested to evaluate the system reliability of the filter media, which may help in risk assessment and risk management for decision making.
Abstract:
The problem of cooperative beamforming for maximizing the achievable data rate of an energy-constrained two-hop amplify-and-forward (AF) network is considered. Assuming perfect channel state information (CSI) of all the nodes, we evaluate the optimal scaling factor for the relay nodes. Along with an individual power constraint on each of the relay nodes, we consider a weighted sum power constraint. The proposed iterative algorithm initially solves a set of relaxed problems with the weighted sum power constraint and then updates the solution to accommodate the individual constraints. These relaxed problems are in turn solved using a sequence of Quadratic Eigenvalue Problems (QEPs). The key contribution of this letter is the generalization of cooperative beamforming to incorporate both the individual and the weighted sum power constraints. Furthermore, we propose a novel algorithm based on the QEP and discuss its convergence.
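A generic sketch of how a quadratic eigenvalue problem (lam**2*A2 + lam*A1 + A0)x = 0 can be solved by companion linearization to a generalized eigenvalue problem; the matrices here are random placeholders, not the beamforming matrices of the letter:

import numpy as np
from scipy.linalg import eig

def solve_qep(A2, A1, A0):
    # solve (lam**2 * A2 + lam * A1 + A0) x = 0 via companion linearization
    n = A0.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    L = np.block([[Z, I], [-A0, -A1]])
    M = np.block([[I, Z], [Z, A2]])
    lams, vecs = eig(L, M)                 # generalized eigenvalue problem
    return lams, vecs[:n, :]               # eigenvalues and the x-part of the eigenvectors

rng = np.random.default_rng(3)
A2, A1, A0 = (rng.standard_normal((3, 3)) for _ in range(3))
lams, X = solve_qep(A2, A1, A0)
residual = (lams[0] ** 2 * A2 + lams[0] * A1 + A0) @ X[:, 0]
print("max residual of first eigenpair:", np.max(np.abs(residual)))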
Abstract:
We consider near-optimal policies for a single user transmitting on a wireless channel that minimize the average queue length under an average power constraint. Power is consumed in the transmission of data only, and we consider the case when the power used in transmission is a linear function of the data transmitted. The transmission channel may experience multipath fading. Later, we extend these results to the multiuser case. We show that our policies can be used in a system with energy-harvesting sources at the transmitter. Next, we consider data users that require minimum rate guarantees. Finally, we consider a system that has both data and real-time users. Our policies have low computational complexity and closed-form expressions for the mean delays, and they require only the mean arrival rate, with no queue-length information.