952 results for Ruin Probability
Abstract:
It is becoming increasingly popular to consider species interactions when managing ecological food webs. Such an approach is useful for determining how management can affect multiple species, with either beneficial or detrimental consequences. Identifying such actions is particularly valuable in the context of conservation decision making, as funding is severely limited. This paper outlines a new approach that simplifies the resource-allocation problem in a two-species system for a range of species interactions: independent, mutualism, predator-prey, and competitive exclusion. We assume that both species are endangered, and we do not account for decisions over time. We find that the optimal funding allocation directs resources to the conservation of the species with the highest marginal gain in expected probability of survival and that, across all interaction types except mutualism, the optimal conservation funding allocation differs between species. The loss in efficiency from ignoring species interactions was most severe in predator-prey systems. The funding problem we address, in which an ecosystem includes multiple threatened species, will only become more commonplace as increasing numbers of species worldwide become threatened. © 2011 Elsevier B.V.
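The "highest marginal gain" rule described above can be sketched numerically. This is an illustrative sketch only: the two concave survival curves below are invented for the example and are not the paper's actual model.

```python
# Hypothetical, illustrative survival curves (diminishing returns in funding).
def survival_a(x):
    # Survival probability of species A given funding x.
    return 1 - 0.9 * 0.5 ** x

def survival_b(x):
    # Survival probability of species B given funding x.
    return 1 - 0.8 * 0.7 ** x

def allocate(budget, step=0.01):
    """Greedily assign each funding increment to the species whose expected
    survival probability increases the most; this is optimal when the
    survival curves are concave (diminishing returns)."""
    a = b = 0.0
    for _ in range(int(round(budget / step))):
        gain_a = survival_a(a + step) - survival_a(a)
        gain_b = survival_b(b + step) - survival_b(b)
        if gain_a >= gain_b:
            a += step
        else:
            b += step
    return a, b

a, b = allocate(5.0)  # funding assigned to species A and B respectively
```

Because the curves are concave, the greedy increments drive the two marginal gains toward equality, which is the condition characterising the optimal split.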
Abstract:
Almost 10 years ago, Pullin and Knight (2001) called for an “effectiveness revolution in conservation” to be enabled by the systematic evaluation of evidence for conservation decision making. Drawing from the model used in clinical medicine, they outlined the concept of “evidence-based conservation,” in which existing information, or evidence, from relevant and rigorous research is compiled and analyzed in a systematic manner to inform conservation actions (Cochrane 1972). The promise of evidence-based conservation has generated significant interest; 25 systematic reviews have been completed since 2004 and dozens are underway (Collaboration for Environmental Evidence 2010). However, we argue that an “effectiveness revolution” (Pullin & Knight 2001) in conservation will not be possible unless mechanisms are devised for incorporating the growing evidence base into decision frameworks. For conservation professionals to accomplish the missions of their organizations, they must demonstrate that their actions actually achieve objectives (Pullin & Knight 2009). Systematic evaluation provides a framework for objectively evaluating the effectiveness of actions. To leverage the benefit of these evaluations, we need resource-allocation systems that are responsive to their outcomes. The allocation of conservation resources is often the product of institutional priorities or of reliance on intuition (Sutherland et al. 2004; Pullin & Knight 2005; Cook et al. 2010). We highlight the NICE technology-appraisal process because it provides an example of the formal integration of systematic evidence evaluation with the provision of guidance for action. The transparent process, which clearly delineates the costs and benefits of each alternative action, could also provide the public with new insight into the environmental effects of different decisions. This insight could stimulate a wider discussion about investment in conservation by demonstrating how changes in funding might affect the probability of achieving conservation objectives. © 2010 Society for Conservation Biology
Abstract:
The NLM stream cipher, designed by Hoon Jae Lee, Sang Min Sung, and Hyeong Rag Kim, is a strengthened version of the LM summation generator that combines linear and non-linear feedback shift registers. In recent works, the NLM cipher has been used for message authentication in lightweight communication over wireless sensor networks and for RFID authentication protocols. This work analyses the security of the NLM stream cipher and of the NLM-MAC scheme that is built on top of the NLM cipher. We first show that the NLM cipher suffers from two major weaknesses that lead to key-recovery and forgery attacks. We prove that the internal state of the NLM cipher can be recovered with time complexity of about n^(log 7) × 2, where the total length of the internal state is 2·n + 2 bits. The attack needs about n² key-stream bits. We also show that an adversary is able to forge any MAC tag very efficiently by having only one (MAC tag, ciphertext) pair. The proposed attacks are practical and break the scheme with a negligible error probability.
Abstract:
We consider online trading in a single security with the objective of getting rich when its price ever exhibits a large upcrossing, without risking bankruptcy. We investigate payoff guarantees that are expressed in terms of the extremity of the upcrossings. We obtain an exact and elegant characterisation of the guarantees that can be achieved. Moreover, we derive a simple canonical strategy for each attainable guarantee.
Abstract:
Models of the mammalian clock have traditionally been based around two feedback loops: the self-repression of Per/Cry through interference with activation by BMAL/CLOCK, and the repression of Bmal/Clock by the REV-ERB proteins. Recent experimental evidence suggests that the D-box, a transcription-factor binding site associated with daytime expression, plays a larger role in clock function than has previously been understood. We present a simplified clock model that highlights the role of the D-box and illustrate an approach for finding maximum-entropy ensembles of model parameters, given experimentally imposed constraints. Parameter variability can be mitigated using prior probability distributions derived from genome-wide studies of cellular kinetics. Our model reproduces predictions concerning the dual regulation of Cry1 by the D-box and Rev-ErbA/ROR response element (RRE) promoter elements and allows for ensemble-based predictions of phase response curves (PRCs). Nonphotic signals such as Neuropeptide Y (NPY) may act by promoting Cry1 expression, whereas photic signals likely act by stimulating expression from the E/E' box. Ensemble generation with parameter probability restraints reveals more about a model's behavior than a single optimal parameter set.
Abstract:
Objective: To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data. Design: Systematic review. Data sources: The electronic databases searched included PubMed, Cinahl, Medline, Google Scholar, and Proquest. The bibliographies of all relevant articles were examined, and associated articles were identified using a snowballing technique. Selection criteria: For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data. Methods: The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed. Results: Occupational injuries were the focus of half of the machine learning studies, and the most commonly described methods were Bayesian probability or Bayesian network based methods used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold-standard data, content-expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalisability, source data quality, complexity of models, and integration of content and technical knowledge were discussed. Conclusions: The use of narrative text for injury surveillance has grown in popularity, complexity, and quality over recent years. With advances in data mining techniques, increased capacity for analysis of large databases, involvement of computer scientists in the injury prevention field, and more comprehensive use and description of quality assurance methods in text mining approaches, it is likely that we will see continued growth and advancement in knowledge of text mining in the injury field.
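The Bayesian-probability approach to predicting injury categories from narrative text can be illustrated with a minimal multinomial naive Bayes classifier. The tiny training "narratives" below are invented for demonstration; the reviewed studies used far larger surveillance databases.

```python
import math
from collections import Counter, defaultdict

# Invented toy narratives; real studies use large injury surveillance corpora.
train = [
    ("fell from ladder while painting ceiling", "fall"),
    ("slipped on wet floor and fell", "fall"),
    ("hand caught in conveyor machine", "machinery"),
    ("finger crushed by press machine", "machinery"),
]

class NaiveBayes:
    """Multinomial naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter()
        self.vocab = set()
        for text, label in docs:
            words = text.split()
            self.class_counts[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        def log_posterior(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            prior = math.log(self.class_counts[label] / sum(self.class_counts.values()))
            likelihood = sum(
                math.log((counts[w] + 1) / (total + len(self.vocab)))
                for w in text.split()
            )
            return prior + likelihood
        # Pick the category with the highest posterior probability.
        return max(self.class_counts, key=log_posterior)

model = NaiveBayes().fit(train)
print(model.predict("worker fell off a ladder"))  # → fall
```

Laplace smoothing keeps unseen words (such as "worker") from zeroing out a category's probability, which matters for the short, noisy free text typical of injury surveillance records.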
Abstract:
Experts are increasingly being called upon to quantify their knowledge, particularly in situations where data are not yet available or are of limited relevance. In many cases this involves asking experts to estimate probabilities. For example, experts in ecology or related fields might be called upon to estimate probabilities of the incidence or abundance of species, and how these relate to environmental factors. Although many ecologists undergo some training in statistics at undergraduate and postgraduate levels, this does not necessarily focus on the interpretation of probabilities. More accurate elicitation can be obtained by training experts prior to elicitation and, if necessary, tailoring elicitation to address an expert’s strengths and weaknesses. Here we address the first step of diagnosing conceptual understanding of probabilities. We refer to the psychological literature, which identifies several common biases or fallacies that arise during elicitation. These form the basis for developing a diagnostic questionnaire as a tool for supporting accurate elicitation, particularly when several experts or elicitors are involved. We report on a qualitative assessment of results from a pilot of this questionnaire. These results raise several implications for training experts, not only prior to elicitation but, more strategically, by targeting them while they are still undergraduate or postgraduate students.
Abstract:
The purpose of this research is to assess the daylight performance of buildings with climate-responsive envelopes of complex geometry that integrate shading devices into the façade. To this end, two case studies were chosen for their complex geometries and integrated daylighting devices. The effect of different parameters of the daylighting devices is analysed through climate-based daylight metrics.
Abstract:
A state-wide observational study was undertaken by the Centre for Accident Research and Road Safety – Queensland (CARRS-Q) in order to investigate the prevalence of unregistered vehicles on Queensland roads. The study was conducted on behalf of the Queensland Department of Transport and Main Roads (TMR) and builds upon research commissioned by Queensland Transport and conducted in 2000, 2003, and 2005. Vehicles were observed throughout Queensland from April 13, 2010 to May 8, 2010 in a mix of small rural towns, regional centres, and metropolitan locations. The locations sampled for this study were restricted to destinations (e.g. shopping centres, hospitals, airports, and park-and-ride facilities) rather than residential areas, and a variety of different destinations were sampled. Sampling vehicles in these types of locations provides a higher probability that the vehicles captured in the survey are being driven on a regular basis (Younglove et al. 2004).
Abstract:
The mineral barahonaite is in all probability a member of the smolianinovite group. It is an arsenate formed as a secondary mineral in the oxidized zone of sulphide deposits. We have studied barahonaite using a combination of Raman and infrared spectroscopy. The mineral is characterized by a Raman band at 863 cm−1 with low-wavenumber shoulders at 802 and 828 cm−1; these bands are assigned to arsenate and hydrogen arsenate stretching vibrations. The infrared spectrum shows a broad spectral profile. Two Raman bands at 506 and 529 cm−1 are assigned to the triply degenerate arsenate bending vibration (F2, ν4), and the Raman bands at 325, 360, and 399 cm−1 are attributed to the arsenate ν2 bending vibration. Raman and infrared bands in the 2500–3800 cm−1 spectral range are assigned to water and hydroxyl stretching vibrations. Raman spectroscopy is better suited than infrared spectroscopy to studying the structure of barahonaite, probably because of its much higher spatial resolution.
Abstract:
An effective prognostics program will provide ample lead time for maintenance engineers to schedule a repair and to acquire replacement components before catastrophic failures occur. This paper presents a technique for accurate assessment of the remnant life of machines based on a health-state probability estimation technique. For a comparative study of the proposed model with the proportional hazard model (PHM), experimental bearing-failure data from an accelerated bearing test rig were used. The results show that the proposed prognostic model based on health-state probability estimation provides more accurate predictions than the commonly used PHM in the bearing failure case study.
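One way to read "remnant life from health-state probabilities" is as a probability-weighted average of the expected remaining life in each health state. The sketch below is a hedged illustration of that idea only: the state names, probabilities, and per-state mean remaining lives are invented stand-ins, not values from the paper's bearing study.

```python
# Illustrative health states and per-state mean remaining lives (invented).
mean_remaining_hours = {
    "normal": 500.0,
    "degraded": 120.0,
    "near-failure": 10.0,
}

def remnant_life(state_probs):
    """Expected remaining useful life: the probability-weighted average of the
    mean remaining life associated with each discrete health state."""
    assert abs(sum(state_probs.values()) - 1.0) < 1e-9  # must be a distribution
    return sum(p * mean_remaining_hours[s] for s, p in state_probs.items())

# Classifier output for the current machine condition (invented example).
rul = remnant_life({"normal": 0.2, "degraded": 0.7, "near-failure": 0.1})
```

As degradation progresses, probability mass shifts toward the later states and the estimated remnant life shrinks accordingly.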
Abstract:
Drink driving incidents in the Australian community continue to be a major road safety problem, resulting in a third of all fatalities. Drink driving prevalence remains high, with the rate of Australians who self-report drink driving sitting at 11%–12.1% [1,2]. Research in the area to date has focused on recidivist offenders, who have a higher probability of reoffending, while there is comparatively limited research on first-time offenders. An important and understudied area relates to the characteristics of first offenders and the predictors of recidivism. This study examined the findings of in-depth focussed interviews with a sample of 20 individual first-time drink driving offenders in Queensland, recruited at the time of court mention.
Abstract:
The use of expert knowledge to quantify a Bayesian Network (BN) is necessary when data are not available. This, however, raises questions regarding how opinions from multiple experts can be used in a BN. Linear pooling is a popular method for combining probability assessments from multiple experts. In particular, Prior Linear Pooling (PrLP), which pools opinions and then places them into the BN, is a common method. This paper first proposes an alternative pooling method, Posterior Linear Pooling (PoLP), which constructs a BN for each expert and then pools the resulting probabilities at the nodes of interest. Second, it investigates the advantages and disadvantages of using these pooling methods to combine the opinions of multiple experts. Finally, the methods are applied to an existing BN, the Wayfinding Bayesian Network Model, to investigate the behaviour of different groups of people and how the different methods may be able to capture such differences. The paper focusses on six nodes (Human Factors, Environmental Factors, Wayfinding, Communication, Visual Elements of Communication, and Navigation Pathway) and three subgroups (Gender: female, male; Travel Experience: experienced, inexperienced; Travel Purpose: business, personal), and finds that different behaviours can indeed be captured by the different methods.
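The distinction between the two pooling strategies can be shown on a toy two-node network A → B with two equally weighted experts. All probabilities below are invented for the example and are not taken from the Wayfinding model; the point is only that pooling before propagation (PrLP) and pooling after propagation (PoLP) generally give different answers.

```python
# Two hypothetical experts, each specifying the full toy network A -> B.
experts = [
    {"p_a": 0.6, "p_b_given_a": 0.9, "p_b_given_not_a": 0.2},
    {"p_a": 0.3, "p_b_given_a": 0.7, "p_b_given_not_a": 0.4},
]
weights = [0.5, 0.5]  # equal expert weights

def marginal_b(e):
    # P(B) by summing over A within a single (or pooled) network.
    return e["p_a"] * e["p_b_given_a"] + (1 - e["p_a"]) * e["p_b_given_not_a"]

def pool(values):
    # Linear opinion pool: weighted average of the experts' probabilities.
    return sum(w * v for w, v in zip(weights, values))

# PrLP: pool each (conditional) probability first, then propagate the BN.
pooled_network = {k: pool([e[k] for e in experts]) for k in experts[0]}
prlp_b = marginal_b(pooled_network)

# PoLP: propagate each expert's BN, then pool the posteriors at node B.
polp_b = pool([marginal_b(e) for e in experts])
```

With these numbers PrLP gives P(B) = 0.525 while PoLP gives P(B) = 0.555: averaging parameters and then multiplying them is not the same as multiplying and then averaging, which is why the two methods can capture group differences differently.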
Abstract:
An increasing number of studies analyze the relationship between natural disaster damage and income levels, but they do not consider the distinction between public and private disaster mitigation. This paper empirically distinguishes these two types of mitigation using Japanese prefectural panel data from 1975 to 2007. Our results show that public mitigation, rather than private mitigation, has contributed to mitigating the total damage resulting from natural disasters. Our estimation of cost-benefit ratios for each prefecture confirms that the mitigation efforts of urban prefectures are less effective than those of rural prefectures, whether focusing on large disasters or on frequent small ones. Hence, urban prefectures need to reassess their public mitigation measures. Furthermore, to lessen the damage resulting from extreme catastrophes, policy makers need to invest in improved mitigation infrastructure when faced with a high probability of disasters.
Abstract:
A new transdimensional Sequential Monte Carlo (SMC) algorithm called SMCVB is proposed. In an SMC approach, a weighted sample of particles is generated from a sequence of probability distributions which ‘converge’ to the target distribution of interest, in this case a Bayesian posterior distribution. The approach is based on the use of variational Bayes to propose new particles at each iteration of the SMCVB algorithm in order to target the posterior more efficiently. The variational-Bayes-generated proposals are not limited to a fixed dimension. This means that the weighted particle sets that arise can have varying dimensions, thereby allowing us the option to also estimate an appropriate dimension for the model. This novel algorithm is outlined within the context of finite mixture model estimation. This provides a less computationally demanding alternative to using reversible jump Markov chain Monte Carlo kernels within an SMC approach. We illustrate these ideas in a simulated data analysis and in applications.
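The core SMC idea the abstract describes (a weighted particle sample pushed through a sequence of distributions converging to the posterior) can be sketched with a minimal fixed-dimension, likelihood-tempered sampler. This is a generic illustration only: the toy data and the Gaussian-mean target are invented, and the variational-Bayes proposal step and varying dimensions that define SMCVB are deliberately omitted.

```python
import math
import random

random.seed(1)
data = [1.2, 0.8, 1.5, 1.1]  # toy observations, modelled as N(mu, 1)

def log_lik(mu):
    # Log-likelihood of the data under mean mu (up to an additive constant).
    return sum(-0.5 * (x - mu) ** 2 for x in data)

# Initialise particles from the prior mu ~ N(0, 2^2), with equal weights.
particles = [random.gauss(0.0, 2.0) for _ in range(2000)]
weights = [1.0 for _ in particles]

# Sequence of distributions: prior * likelihood^gamma, gamma rising 0 -> 1.
temps = [0.0, 0.25, 0.5, 0.75, 1.0]
for prev, curr in zip(temps, temps[1:]):
    # Reweight by the likelihood raised to the temperature increment.
    weights = [w * math.exp((curr - prev) * log_lik(p))
               for w, p in zip(weights, particles)]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Multinomial resampling back to an equally weighted particle set.
    particles = random.choices(particles, weights=probs, k=len(particles))
    weights = [1.0 for _ in particles]

posterior_mean = sum(particles) / len(particles)
```

A production sampler would add move steps between resampling rounds (SMCVB's variational-Bayes proposals play that role, and additionally let the particle dimension vary); without them, repeated resampling of a fixed particle pool degrades sample diversity.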