951 results for exceedance probabilities
Abstract:
"Addenda" Dec. 1953, tipped in.
Abstract:
Mode of access: Internet.
Abstract:
Mode of access: Internet.
Abstract:
"Notes and references": p. 168-172.
Abstract:
Mode of access: Internet.
Abstract:
Landslides often occur on slopes rendered unstable by underlying geology, geomorphology, hydrology, weather and climate, slope modifications, or deforestation. Unfortunately, humans commonly exacerbate such unstable conditions through careless or imprudent development practices. Owing to its local geology, geography, and climatic conditions, the Puget Sound region of western Washington State is especially landslide-prone. Despite this known issue, detailed analyses of landslide risk for specific communities are few. This study aims to classify areas of high landslide risk on the westerly bluffs of the 7.5-minute Freeland quadrangle using a combined approach: mapping with LiDAR imagery and the Landform Remote Identification Model (LRIM) to identify landslides, and implementation of the Shallow Slope Stability Model (SHALSTAB) to establish a landslide exceedance probability. The objective is to produce a risk assessment for two shallow landslide scenarios: (1) minimum bluff setback and runout and (2) maximum bluff setback and runout. A simple risk equation that combines the probability of hazard occurrence with physical and economic vulnerability (van Westen, 2004) was applied to both scenarios. Results indicate a possible total loss of as much as $32.6 billion from shallow landslides, given a setback of 12 m and a runout of 235 m.
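The simple risk equation cited above (van Westen, 2004) multiplies hazard probability by vulnerability and by the value of the elements at risk, summed over exposed parcels. A minimal sketch of that arithmetic follows; every number and the parcel list are hypothetical illustrations, not figures from the study:

```python
# Hypothetical illustration of the simple risk equation:
# risk = P(hazard) * vulnerability * value of the element at risk,
# summed over parcels inside the modelled runout zone.

def parcel_risk(p_hazard, vulnerability, value):
    """Expected loss for a single parcel (all inputs are assumptions)."""
    return p_hazard * vulnerability * value

# Illustrative parcels: (exceedance probability, physical vulnerability 0-1, value in $)
parcels = [
    (0.10, 0.8, 2_500_000),
    (0.05, 0.5, 1_200_000),
    (0.02, 0.3,   900_000),
]

total_risk = sum(parcel_risk(p, v, val) for p, v, val in parcels)
print(f"Expected loss: ${total_risk:,.0f}")
```

Scenario comparisons (minimum versus maximum setback and runout) would re-run the same sum with a different set of exposed parcels.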
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Most of the modern developments in classification trees are aimed at improving their predictive capacity. This article considers a curiously neglected aspect of classification trees, namely the reliability of the predictions that come from a given tree. Because a node of a tree represents, in the limit, a point in the predictor space, the aim of this article is the development of localized assessments of the reliability of prediction rules. A classification tree may be used either to provide a probability forecast, where for each node the membership probabilities for each class constitute the prediction, or a true classification, where each new observation is predictively assigned to a unique class. Correspondingly, two types of reliability measure are derived, namely prediction reliability and classification reliability. We use bootstrapping methods as the main tool to construct these measures, and we provide a suite of graphical displays by which they may be easily appreciated. In addition to providing an estimate of the reliability of specific forecasts of each type, these measures can also be used to guide future data collection to improve the effectiveness of the tree model. The motivating example has a binary response, namely the presence or absence of a species of Eucalypt, Eucalyptus cloeziana, at a given sampling location in response to a suite of environmental covariates (although the methods are not restricted to binary response data).
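The bootstrap idea behind "prediction reliability" can be sketched very simply: resample the training cases that fall in one node and look at the spread of the estimated class-membership probability. The node data below are hypothetical stand-ins, not the Eucalypt data from the article:

```python
# Bootstrap spread of a node's class-membership probability (hypothetical data).
import random

random.seed(42)

# 1 = species present, 0 = absent, for the cases landing in one leaf node
node_labels = [1] * 14 + [0] * 6   # point estimate: P(present) = 0.7

def bootstrap_probs(labels, n_boot=2000):
    """Bootstrap distribution of the class-membership probability."""
    n = len(labels)
    return [sum(random.choices(labels, k=n)) / n for _ in range(n_boot)]

probs = sorted(bootstrap_probs(node_labels))
lo = probs[int(0.025 * len(probs))]
hi = probs[int(0.975 * len(probs))]
print(f"P(present) = 0.70, 95% bootstrap interval ~ ({lo:.2f}, {hi:.2f})")
```

A wide interval at a node signals an unreliable forecast there, which is exactly the kind of localized assessment the abstract describes.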
Abstract:
Spatial characterization of non-Gaussian attributes in earth sciences and engineering commonly requires the estimation of their conditional distribution. The indicator and probability kriging approaches of current nonparametric geostatistics provide approximations for estimating conditional distributions. They do not, however, provide results similar to those in the cumbersome implementation of simultaneous cokriging of indicators. This paper presents a new formulation termed successive cokriging of indicators that avoids the classic simultaneous solution and related computational problems, while obtaining equivalent results to the impractical simultaneous solution of cokriging of indicators. A successive minimization of the estimation variance of probability estimates is performed, as additional data are successively included into the estimation process. In addition, the approach leads to an efficient nonparametric simulation algorithm for non-Gaussian random functions based on residual probabilities.
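The indicator coding that underlies any form of (co)kriging of indicators can be sketched briefly: each datum z becomes a vector of 0/1 indicators I(z <= z_k) at a set of thresholds, and the estimated conditional CDF at an unsampled location is a weighted average of those indicators. The uniform weights below are only a placeholder for the kriging weights that successive cokriging would obtain by minimising the estimation variance as each datum is brought in; all values are hypothetical:

```python
# Indicator coding behind (co)kriging of indicators (hypothetical data,
# placeholder weights standing in for the solved kriging weights).

thresholds = [1.0, 2.0, 4.0]
data = [0.7, 1.8, 3.5, 5.2]          # hypothetical nearby sample values

def indicators(z, thresholds):
    """0/1 coding I(z <= z_k) at each threshold z_k."""
    return [1.0 if z <= zk else 0.0 for zk in thresholds]

coded = [indicators(z, thresholds) for z in data]

# Placeholder uniform weights; the method computes these per datum.
weights = [1.0 / len(data)] * len(data)

cdf_hat = [sum(w * row[k] for w, row in zip(weights, coded))
           for k in range(len(thresholds))]
print(cdf_hat)    # estimated P(Z <= z_k) at the target location
```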
Abstract:
We present a novel method, called the transform likelihood ratio (TLR) method, for estimating rare event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool, in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results that support the efficiency of the TLR method.
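The transformation idea can be illustrated on the simplest possible case. If X is Pareto with survival function x**(-alpha) for x >= 1, then Y = log(X) is Exponential(alpha), so the heavy-tailed event {X > gamma} becomes the light-tailed event {Y > log(gamma)}, which is then estimated by importance sampling with a likelihood ratio correction. This is a sketch of the idea, not the paper's full method; the choice theta = 1/t is an ad hoc assumption:

```python
# Heavy-tailed Pareto tail probability via the log transform plus
# importance sampling (illustrative sketch, not the paper's algorithm).
import math
import random

random.seed(0)

alpha, gamma = 2.0, 1e3          # exact answer: gamma**(-alpha) = 1e-6
t = math.log(gamma)              # rare-event threshold for Y = log(X)

def tlr_estimate(theta, n=100_000):
    """Importance-sampling estimate of P(Y > t), with Y ~ Exp(alpha)."""
    total = 0.0
    for _ in range(n):
        y = random.expovariate(theta)            # sample from tilted density
        if y > t:
            # likelihood ratio f_alpha(y) / g_theta(y)
            total += (alpha / theta) * math.exp((theta - alpha) * y)
    return total / n

est = tlr_estimate(theta=1.0 / t)   # tilt so many samples land beyond t
print(est, gamma ** (-alpha))
```

Crude Monte Carlo would need on the order of 10**8 samples to see this event at all; after the transform and tilt, 10**5 samples give an accurate estimate.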
Abstract:
The birth, death and catastrophe process is an extension of the birth-death process that incorporates the possibility of reductions in population of arbitrary size. We will consider a general form of this model in which the transition rates are allowed to depend on the current population size in an arbitrary manner. The linear case, where the transition rates are proportional to current population size, has been studied extensively. In particular, extinction probabilities, the expected time to extinction, and the distribution of the population size conditional on nonextinction (the quasi-stationary distribution) have all been evaluated explicitly. However, whilst these characteristics are of interest in the modelling and management of populations, processes with linear rate coefficients represent only a very limited class of models. We address this limitation by allowing for a wider range of catastrophic events. Despite this generalisation, explicit expressions can still be found for the expected extinction times.
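The process described above is easy to simulate directly, which is a useful check on explicit extinction-time formulas. The rate functions below are hypothetical choices made only to show the mechanics of state-dependent births, deaths, and catastrophes of arbitrary size:

```python
# Gillespie-style simulation of a birth, death and catastrophe process
# with hypothetical, state-dependent rates.
import random

random.seed(1)

def simulate_extinction_time(n0, b, d, c, t_max=1e6):
    """Simulate until the population hits 0; return the extinction time."""
    n, t = n0, 0.0
    while n > 0:
        rates = [b(n), d(n), c(n)]
        total = sum(rates)
        t += random.expovariate(total)
        if t > t_max:
            return t_max                         # guard against non-extinction
        u = random.uniform(0.0, total)
        if u < rates[0]:
            n += 1                               # birth
        elif u < rates[0] + rates[1]:
            n -= 1                               # single death
        else:
            n -= random.randint(1, n)            # catastrophe of arbitrary size
    return t

# Subcritical example: deaths dominate births, so extinction is certain.
times = [simulate_extinction_time(10,
                                  lambda n: 0.5 * n,   # linear birth rate
                                  lambda n: 1.0 * n,   # linear death rate
                                  lambda n: 0.2)       # constant catastrophe rate
         for _ in range(200)]
print(f"Mean extinction time over 200 runs: {sum(times) / len(times):.2f}")
```

Swapping in nonlinear rate functions for `b`, `d`, or `c` exercises exactly the generalisation beyond linear rate coefficients that the abstract addresses.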
Abstract:
The purpose of this analysis is threefold: first, to extract from the literature current levels of GP detection of at-risk drinking by their patients, the rates at which general practitioners (GPs) offer an intervention, and the effectiveness of these interventions; secondly, to develop a model based on this literature to be used in conjunction with scenario analysis; and thirdly, to consider the cost implications of current efforts and of various scenarios. This study deals specifically with Australian general practice. A two-step procedure is used in the scenario analysis, which involves identifying opportunities for detection, intervention and effectiveness, and assigning probabilities to outcomes. The results suggest that increasing the rate of GP intervention achieves the greatest benefit and return on resource use. For every 5-percentage-point increase in the rate of GP intervention, an additional 26 754 at-risk drinkers modify their drinking behaviour, at a cost of $231.45 per patient. This compares with costs per patient modifying drinking behaviour of $232.60 and $208.31 for every 5-percentage-point increase in the rates of detection and effectiveness, respectively. The knowledge, skill and attitude of practitioners toward drinking are significant, and they can be the prime motivators in persuading their patients to modify drinking behaviour.
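The scenario logic described above chains detection, intervention, and effectiveness as successive probabilities, and each scenario perturbs one of them. A minimal sketch follows; the population size and all rates are hypothetical stand-ins, not the figures estimated in the study:

```python
# Chained-probability scenario model (all rates and the population
# are hypothetical illustrations, not the study's estimates).

def drinkers_modified(population, p_detect, p_intervene, p_effective):
    """At-risk drinkers who change behaviour under one scenario."""
    return population * p_detect * p_intervene * p_effective

POP = 1_000_000                    # hypothetical at-risk population seen by GPs
base = drinkers_modified(POP, 0.30, 0.40, 0.25)
scenario = drinkers_modified(POP, 0.30, 0.45, 0.25)   # +5 points on intervention

extra = scenario - base
print(f"Additional drinkers modifying behaviour: {extra:,.0f}")
# Cost per additional patient = extra programme cost / extra patients helped
```

Because the three probabilities multiply, a 5-point bump in any one stage scales the outcome by the same factor; the stages differ only in what the bump costs, which is why the study compares cost per patient across detection, intervention, and effectiveness scenarios.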