8 results for sensemaking of risk
at Indian Institute of Science - Bangalore - India
Abstract:
Methodologies are presented for minimization of risk in a river water quality management problem. A risk minimization model is developed to minimize the risk of low water quality along a river in the face of conflict among various stakeholders. The model consists of three parts: a water quality simulation model, a risk evaluation model with uncertainty analysis and an optimization model. Sensitivity analysis, First Order Reliability Analysis (FORA) and Monte Carlo simulations are performed to evaluate the fuzzy risk of low water quality. Fuzzy multiobjective programming is used to formulate the multiobjective model. Probabilistic Global Search Lausanne (PGSL), a recently developed global search algorithm, is used for solving the resulting non-linear optimization problem. The algorithm is based on the assumption that better sets of points are more likely to be found in the neighborhood of good sets of points, and it therefore intensifies the search in regions that contain good solutions. Another model is developed for risk minimization, which deals only with the moments of the generated probability density functions of the water quality indicators. Suitable skewness values of the water quality indicators, which lead to low fuzzy risk, are identified. The results of the models are compared with those of a deterministic fuzzy waste load allocation model (FWLAM) when the methodologies are applied to the case study of the Tunga-Bhadra river system in southern India, using a steady-state BOD-DO model. The fractional removal levels resulting from the risk minimization model are slightly higher, but result in a significant reduction in the risk of low water quality. (c) 2005 Elsevier Ltd. All rights reserved.
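A minimal sketch of the Monte Carlo evaluation of fuzzy risk described above. The membership bounds (4-6 mg/L of dissolved oxygen) and the normal DO distribution are illustrative assumptions standing in for the output of the paper's water quality simulation model, not values taken from it.

import numpy as np

rng = np.random.default_rng(0)

def low_quality_membership(do_conc, lo=4.0, hi=6.0):
    # Fuzzy membership of the event "low water quality":
    # 1 below lo mg/L dissolved oxygen, 0 above hi, linear in between.
    return np.clip((hi - do_conc) / (hi - lo), 0.0, 1.0)

# Hypothetical uncertain DO concentration (mg/L) at a river checkpoint,
# standing in for samples produced by a BOD-DO simulation model.
do_samples = rng.normal(loc=5.5, scale=0.8, size=100_000)

# Fuzzy risk = expected membership of the low-quality event
# under the uncertainty in the water quality indicator.
fuzzy_risk = low_quality_membership(do_samples).mean()
print(f"Monte Carlo estimate of fuzzy risk: {fuzzy_risk:.3f}")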
Abstract:
The Kachchh region of Gujarat, India bore the brunt of a disastrous earthquake of magnitude Mw = 7.6 that occurred on January 26, 2001. The major cause of failure of various structures, including earthen dams, was noted to be the presence of liquefiable alluvium in the foundation soil. Results of back-analysis of the failures of the Chang, Tappar, Kaswati and Rudramata earth dams using a pseudo-static limit equilibrium approach, presented in this paper, confirm that the presence of the liquefiable layer contributed to lower factors of safety, leading to a base type of failure that was also observed in the field. Following the earthquake, the earth dams have been rehabilitated by the concerned authority, and it is imperative that the reconstructed sections be reanalyzed. It is also increasingly realized that, in view of the large-scale investment made, risk assessment of dams through probabilistic analysis is necessary. In this study, it is demonstrated that the probabilistic approach, when used in conjunction with the deterministic approach, helps in providing a rational solution for quantifying the safety of a dam and in estimating the risk associated with its construction. (C) 2007 Elsevier B.V. All rights reserved.
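As a rough illustration of pairing the deterministic and probabilistic approaches mentioned above, the sketch below computes a pseudo-static factor of safety for a simple infinite-slope slip surface and then estimates a failure probability by sampling an uncertain friction angle. All soil properties and the seismic coefficient are assumed values, not data from the dams studied.

import numpy as np

def pseudo_static_fs(c, phi_deg, gamma, z, beta_deg, kh):
    # Pseudo-static factor of safety of an infinite slope (planar slip):
    # the horizontal seismic force kh*W adds driving shear and
    # reduces the normal stress on the slip plane.
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    normal = gamma * z * (np.cos(beta)**2 - kh * np.sin(beta) * np.cos(beta))
    driving = gamma * z * (np.sin(beta) * np.cos(beta) + kh * np.cos(beta)**2)
    return (c + normal * np.tan(phi)) / driving

# Deterministic check with assumed mean properties (kPa, kN/m^3, m).
print("deterministic FS:",
      pseudo_static_fs(c=10.0, phi_deg=30.0, gamma=18.0, z=5.0,
                       beta_deg=25.0, kh=0.15))

# Probabilistic check: treat the friction angle as uncertain and
# estimate the probability that the factor of safety falls below 1.
rng = np.random.default_rng(1)
phi_samples = rng.normal(30.0, 3.0, 50_000)
fs = pseudo_static_fs(10.0, phi_samples, 18.0, 5.0, 25.0, 0.15)
print("P(FS < 1):", (fs < 1.0).mean())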
Abstract:
The goal of speech enhancement algorithms is to provide an estimate of the clean speech starting from noisy observations. The commonly employed cost function is the mean square error (MSE). However, the MSE can never be computed in practice, because it requires the clean speech, which is unknown. It therefore becomes necessary to find practical alternatives to the MSE. In image denoising problems, the cost function (also referred to as the risk) is often replaced by an unbiased estimator. Motivated by this approach, we reformulate the problem of speech enhancement from the perspective of risk minimization. Some recent contributions in risk estimation have employed Stein's unbiased risk estimator (SURE) together with a parametric denoising function, which is a linear expansion of thresholds/bases (LET). We show that the first-order case of SURE-LET results in a Wiener-filter-type solution if the denoising function is made frequency-dependent. We also provide enhancement results obtained with both techniques and characterize the improvement by means of local as well as global SNR calculations.
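The Wiener-filter-type solution mentioned above can be sketched as follows: for a per-frequency gain a_k applied to the DFT coefficients, minimizing an unbiased estimate of the MSE bin by bin yields the empirical-Wiener gain a_k = max(0, 1 - E|N_k|^2 / |Y_k|^2). The sinusoidal stand-in for a speech frame and the noise level below are assumptions for illustration, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical clean signal standing in for a speech frame.
n = 1024
t = np.arange(n)
clean = np.sin(0.05 * t) + 0.5 * np.sin(0.22 * t)
sigma = 0.5
noisy = clean + rng.normal(0.0, sigma, n)

# First-order, frequency-dependent shrinkage X_k = a_k * Y_k.
# Minimizing the unbiased risk estimate per bin gives the
# empirical-Wiener gain; E|N_k|^2 = n * sigma^2 for white noise
# and NumPy's unnormalized DFT.
Y = np.fft.rfft(noisy)
gain = np.clip(1.0 - n * sigma**2 / np.abs(Y)**2, 0.0, 1.0)
denoised = np.fft.irfft(gain * Y, n)

def snr_db(ref, est):
    return 10.0 * np.log10(np.sum(ref**2) / np.sum((ref - est)**2))

print(f"input SNR:  {snr_db(clean, noisy):.1f} dB")
print(f"output SNR: {snr_db(clean, denoised):.1f} dB")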
Abstract:
In this paper, we explore noise-tolerant learning of classifiers. We formulate the problem as follows. We assume that there is an unobservable training set that is noise free. The actual training set given to the learning algorithm is obtained from this ideal data set by corrupting the class label of each example, where the probability that the class label of an example is corrupted is a function of the feature vector of the example. This accounts for most kinds of noisy data one encounters in practice. We say that a learning method is noise tolerant if the classifiers learnt with noise-free data and with noisy data both have the same classification accuracy on the noise-free data. In this paper, we analyze the noise-tolerance properties of risk minimization under different loss functions. We show that risk minimization under the 0-1 loss function has impressive noise-tolerance properties, that under the squared error loss it is tolerant only to uniform noise, and that risk minimization under the other loss functions considered is not noise tolerant. We conclude the paper with some discussion of the implications of these theoretical results.
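The noise tolerance of 0-1 risk minimization can be seen in a small simulation: under uniform label noise with rate eta < 0.5, the noisy risk of any classifier equals (1 - 2*eta) times its clean risk plus eta, so the minimizer is unchanged. The one-dimensional Gaussian classes and the threshold-classifier family below are assumptions for illustration, not the paper's setup.

import numpy as np

rng = np.random.default_rng(3)

# Noise-free 1-D two-class data; the optimal threshold is 0.
n = 20_000
y_clean = rng.choice([-1, 1], n)
x = rng.normal(loc=1.0 * y_clean, scale=1.0)

# Uniform label noise: flip each label with probability eta.
eta = 0.3
y_noisy = np.where(rng.random(n) < eta, -y_clean, y_clean)

thresholds = np.linspace(-2.0, 2.0, 401)

def emp_risk01(labels):
    # Empirical 0-1 risk of sign(x - theta) for every threshold theta.
    return np.array([np.mean(np.where(x > th, 1, -1) != labels)
                     for th in thresholds])

t_clean = thresholds[np.argmin(emp_risk01(y_clean))]
t_noisy = thresholds[np.argmin(emp_risk01(y_noisy))]
print(f"0-1 risk minimizer, clean labels: {t_clean:+.2f}")
print(f"0-1 risk minimizer, noisy labels: {t_noisy:+.2f}  (nearly unchanged)")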
Abstract:
Carbon dioxide emissions from the burning of coal, oil, and gas are increasing atmospheric carbon dioxide concentrations. These increased concentrations cause additional energy to be retained in Earth's climate system, thus increasing Earth's temperature. Various methods have been proposed to prevent this temperature increase either by reflecting to space sunlight that would otherwise warm Earth or by removing carbon dioxide from the atmosphere. Such intentional alteration of planetary-scale processes has been termed geoengineering. The first category of geoengineering method, solar geoengineering (also known as solar radiation management, or SRM), raises novel global-scale governance and environmental issues. Some SRM approaches are thought to be low in cost, so the scale of SRM deployment will likely depend primarily on considerations of risk. The second category of geoengineering method, carbon dioxide removal (CDR), raises issues related primarily to scale, cost, effectiveness, and local environmental consequences. The scale of CDR deployment will likely depend primarily on cost.
Abstract:
A load and resistance factor design (LRFD) approach for reinforced soil walls is presented, aimed at producing designs with consistent and uniform levels of risk across the whole range of design applications. Load and resistance factors for reinforced soil walls are evaluated on the basis of reliability theory, and a first-order reliability method (FORM) is used to determine appropriate ranges for their values. Using the pseudo-static limit equilibrium method, an analysis is conducted to evaluate the external stability of reinforced soil walls subjected to earthquake loading. The potential failure mechanisms considered in the analysis are sliding failure, eccentricity failure of the resultant force (or overturning failure) and bearing capacity failure. The proposed procedure includes the variability associated with the reinforced backfill, retained backfill, foundation soil, horizontal seismic acceleration and surcharge load acting on the wall. Partial factors needed to maintain stability against the three modes of failure, targeting a component reliability index of 3.0, are obtained for various values of the coefficient of variation (COV) of the friction angle of the backfill and foundation soil, the distributed dead load surcharge, the cohesion of the foundation soil and the horizontal seismic acceleration. A comparative study between LRFD and allowable stress design (ASD) is also presented with a design example. (C) 2014 Elsevier Ltd. All rights reserved.
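For a linear limit state with independent normal variables, the FORM reliability index reduces to a closed form, and the design-point values yield the calibrated partial factors. The sketch below uses a generic limit state g = R - S with assumed means and COVs; none of the numbers are from the paper, which targets beta = 3.0 per failure mode.

from math import sqrt
from statistics import NormalDist

# Hypothetical linear limit state g = R - S for one failure mode
# (e.g., sliding), with independent normal resistance R and load
# effect S; all numbers are illustrative.
mu_R, cov_R = 1.85, 0.10
mu_S, cov_S = 1.00, 0.20
sd_R, sd_S = mu_R * cov_R, mu_S * cov_S

# Hasofer-Lind reliability index (exact for a linear g with normals).
sigma_g = sqrt(sd_R**2 + sd_S**2)
beta = (mu_R - mu_S) / sigma_g
pf = NormalDist().cdf(-beta)
print(f"beta = {beta:.2f} (target in the study: 3.0), Pf = {pf:.1e}")

# Design-point values give the calibrated partial factors;
# g(R*, S*) = 0 by construction.
R_star = mu_R - beta * sd_R**2 / sigma_g
S_star = mu_S + beta * sd_S**2 / sigma_g
print(f"resistance factor = {R_star / mu_R:.2f}, "
      f"load factor = {S_star / mu_S:.2f}")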
Abstract:
This article presents frequentist inference for accelerated life test data of series systems with independent log-normal component lifetimes. The means of the component log-lifetimes are assumed to depend on the stress variables through a linear stress translation function that can accommodate the standard stress translation functions in the literature. An expectation-maximization algorithm is developed to obtain the maximum likelihood estimates of the model parameters. The maximum likelihood estimates are then further refined by the bootstrap, which is also used to draw inferences about component and system reliability metrics at usage stresses. The developed methodology is illustrated by analyzing both a real and a simulated dataset. A simulation study is also carried out to judge the effectiveness of the bootstrap. It is found that, in this model, application of the bootstrap results in a significant improvement over the simple maximum likelihood estimates.
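As a loose, single-sample analogue of the bootstrap refinement described above (the paper's setting, with series systems and an EM step, is considerably more involved), the sketch below bias-corrects the maximum likelihood estimates for a log-normal lifetime sample via a parametric bootstrap and uses the refined fit to estimate reliability at a usage time. The sample size, true parameters and t0 are assumptions.

import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)

# Hypothetical log-normal lifetimes at a single stress level.
n = 30
data = rng.lognormal(mean=4.0, sigma=0.8, size=n)

# Maximum likelihood estimates of the log-lifetime parameters
# (the MLE of sigma, with ddof=0, is biased in small samples).
log_t = np.log(data)
mu_hat, sigma_hat = log_t.mean(), log_t.std()

# Parametric bootstrap: refit on resamples drawn from the fitted
# model, then subtract the estimated bias from the MLEs.
B = 2000
boot = np.empty((B, 2))
for b in range(B):
    sim_log = np.log(rng.lognormal(mu_hat, sigma_hat, n))
    boot[b] = sim_log.mean(), sim_log.std()
bias = boot.mean(axis=0) - np.array([mu_hat, sigma_hat])
mu_bc, sigma_bc = np.array([mu_hat, sigma_hat]) - bias

print(f"MLE:                 mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
print(f"bootstrap-corrected: mu={mu_bc:.3f}, sigma={sigma_bc:.3f}")

# Reliability at a usage time t0 under the refined fit: P(T > t0).
t0 = 40.0
rel = 1.0 - NormalDist(float(mu_bc), float(sigma_bc)).cdf(float(np.log(t0)))
print(f"estimated reliability at t0={t0}: {rel:.3f}")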
Abstract:
In many applications, the training data from which one needs to learn a classifier is corrupted with label noise. Many standard algorithms, such as SVM, perform poorly in the presence of label noise. In this paper we investigate the robustness of risk minimization to label noise. We prove a sufficient condition on a loss function for risk minimization under that loss to be tolerant to uniform label noise. We show that the 0-1 loss, sigmoid loss, ramp loss and probit loss satisfy this condition, though none of the standard convex loss functions do. We also prove that, by choosing a sufficiently large value of a parameter in the loss function, the sigmoid loss, ramp loss and probit loss can be made tolerant to nonuniform label noise as well, provided the classes are separable under the noise-free data distribution. Through extensive empirical studies, we show that risk minimization under the 0-1 loss, the sigmoid loss and the ramp loss has much better robustness to label noise than the SVM algorithm. (C) 2015 Elsevier B.V. All rights reserved.
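The sufficient condition referred to above is a symmetry property of the loss: for every score f(x), the losses assigned to the two labels sum to a constant. The check below verifies this numerically for the sigmoid and ramp losses (with one common normalization, assumed here) and shows, for contrast, that the convex hinge loss violates it.

import numpy as np

# Symmetry condition for uniform-noise tolerance:
# l(f(x), 1) + l(f(x), -1) = constant for all scores f(x).
z = np.linspace(-5.0, 5.0, 11)                           # candidate scores f(x)

sigmoid = lambda yz: 1.0 / (1.0 + np.exp(yz))            # sigmoid loss (beta = 1)
ramp = lambda yz: np.clip(1.0 - yz, 0.0, 2.0) / 2.0      # ramp loss, normalized
hinge = lambda yz: np.maximum(0.0, 1.0 - yz)             # convex, for contrast

for name, loss in [("sigmoid", sigmoid), ("ramp", ramp), ("hinge", hinge)]:
    s = loss(z * 1) + loss(z * -1)                       # sum over the two labels
    verdict = "constant" if np.allclose(s, s[0]) else "not constant"
    print(f"{name:7s}: min={s.min():.2f}, max={s.max():.2f}  <- {verdict}")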