23 results for Accident insurance.
Abstract:
Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, they would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory, and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components: a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these, we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (the Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated.
We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and vulnerability modules, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.
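The four-component chain described in this abstract can be sketched as a pipeline that turns synthetic rainfall events into loss estimates. Everything below is an illustrative placeholder, not the paper's actual model: the exponential rainfall draw, the hazard threshold, the depth-damage curve and the exposure value are all hypothetical stand-ins for the fitted modules the authors describe.

```python
import random

random.seed(42)

def stochastic_rainfall(n_events):
    """Stochastic event module: draw synthetic extreme rainfall depths (mm).
    An exponential tail is a placeholder for a fitted extreme-value model."""
    return [random.expovariate(1 / 40.0) for _ in range(n_events)]

def flood_hazard(rain_mm):
    """Hazard module placeholder: map rainfall to flood depth (m) above a threshold."""
    return max(0.0, (rain_mm - 60.0) / 100.0)

def vulnerability(depth_m):
    """Vulnerability module placeholder: depth-damage curve giving a damage ratio in [0, 1]."""
    return min(1.0, depth_m * 0.5)

def financial_loss(damage_ratio, exposure=250_000.0):
    """Financial loss module: damage ratio times insured exposure value."""
    return damage_ratio * exposure

# Generate a large synthetic event set and estimate the mean loss per event,
# mirroring how the upper three modules feed the financial module.
events = stochastic_rainfall(10_000)
losses = [financial_loss(vulnerability(flood_hazard(r))) for r in events]
mean_loss = sum(losses) / len(losses)
print(f"Mean loss per synthetic event: {mean_loss:.0f}")
```

Swapping the driving rainfall data set, as the paper does, amounts to refitting `stochastic_rainfall` to each source and comparing the resulting loss distributions.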
Abstract:
COCO-2 is a model for assessing the potential economic costs likely to arise off-site following an accident at a nuclear reactor. COCO-2 builds on work presented in the model COCO-1 developed in 1991 by considering economic effects in more detail, and by including more sources of loss. Of particular note are: the consideration of the directly affected local economy, indirect losses that stem from the directly affected businesses, losses due to changes in tourism consumption, integration with the large body of work on recovery after an accident and a more systematic approach to health costs. The work, where possible, is based on official data sources for reasons of traceability, maintenance and ease of future development. This report describes the methodology and discusses the results of an example calculation. Guidance on how the base economic data can be updated in the future is also provided.
Abstract:
Data from four experimental research projects are presented which have in common that unexpected results caused a change in direction of the research. A plant growth accelerator caused the appearance of white black bean aphids, a synthetic pyrethroid suspected of enhancing aphid reproduction proved to enhance plant growth, a chance conversation with a colleague initiated a search for fungal DNA in aphids, and the accidental invasion of aphid cultures by a parasitoid reversed the aphid population ranking of two Brussels sprout cultivars. This last result led to a whole series of studies on the plant odour preferences of emerging parasitoids which in turn revealed the unexpected phenomenon that chemical cues to the maternal host plant are left with the eggs at oviposition. It is pointed out that, too often, researchers fail to follow up unexpected results because they resist accepting flaws in their hypotheses; also that current application criteria for research funding make it hard to accommodate unexpected findings.
Abstract:
Widespread commercial use of the internet has significantly increased the volume and scope of data being collected by organisations. ‘Big data’ has emerged as a term to encapsulate both the technical and commercial aspects of this growing data collection activity. To date, much of the discussion of big data has centred upon its transformational potential for innovation and efficiency, yet there has been less reflection on its wider implications beyond commercial value creation. This paper builds upon normal accident theory (NAT) to analyse the broader ethical implications of big data. It argues that the strategies behind big data require organisational systems that leave them vulnerable to normal accidents, that is to say some form of accident or disaster that is both unanticipated and inevitable. Whilst NAT has previously focused on the consequences of physical accidents, this paper suggests a new form of system accident that we label data accidents. These have distinct, less tangible and more complex characteristics and raise significant questions over the role of individual privacy in a ‘data society’. The paper concludes by considering the ways in which the risks of such data accidents might be managed or mitigated.
Abstract:
Lack of access to insurance exacerbates the impact of climate variability on smallholder farmers in Africa. Unlike traditional insurance, which compensates proven agricultural losses, weather index insurance (WII) pays out in the event that a weather index is breached. In principle, WII could be provided to farmers throughout Africa. There are two data-related hurdles to this. First, most farmers do not live close enough to a rain gauge with a sufficiently long record of observations. Second, mismatches between weather indices and yield may expose farmers to uncompensated losses, and insurers to unfair payouts – a phenomenon known as basis risk. In essence, basis risk results from complexities in the progression from meteorological drought (rainfall deficit) to agricultural drought (low soil moisture). In this study, we use a land-surface model to describe the transition from meteorological to agricultural drought. We demonstrate that spatial and temporal aggregation of rainfall results in a clearer link with soil moisture, and hence a reduction in basis risk. We then use an advanced statistical method to show how optimal aggregation of satellite-based rainfall estimates can reduce basis risk, enabling remotely sensed data to be utilized robustly for WII.
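The WII payout rule and the space-time aggregation this abstract describes can be sketched as follows. All thresholds, pixel values and the linear payout structure are hypothetical illustrations of how such an index contract is commonly set up, not the study's actual design:

```python
def wii_payout(index_mm, trigger_mm=100.0, exit_mm=50.0, sum_insured=1000.0):
    """Weather index insurance payout: pays when the seasonal rainfall index
    falls below the trigger, scaling linearly to a full payout at the exit level."""
    if index_mm >= trigger_mm:
        return 0.0
    if index_mm <= exit_mm:
        return sum_insured
    return sum_insured * (trigger_mm - index_mm) / (trigger_mm - exit_mm)

def aggregate_rainfall(daily_series_by_pixel):
    """Aggregate rainfall in time (sum over the season) and space (mean over
    pixels) - the kind of aggregation the study links to lower basis risk."""
    pixel_totals = [sum(series) for series in daily_series_by_pixel]
    return sum(pixel_totals) / len(pixel_totals)

# Example: a dry season over three hypothetical satellite pixels.
index = aggregate_rainfall([[2.0] * 30, [1.5] * 30, [2.5] * 30])  # 60, 45, 75 mm
print(index)              # 60.0 mm seasonal index
print(wii_payout(index))  # 800.0 - partial payout, index between exit and trigger
```

The key design point is that the payout depends only on the index, never on a farm-level loss assessment; basis risk is exactly the gap between that index and the loss a farmer actually experiences.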
Abstract:
Remotely sensed rainfall is increasingly being used to manage climate-related risk in gauge sparse regions. Applications based on such data must make maximal use of the skill of the methodology in order to avoid doing harm by providing misleading information. This is especially challenging in regions, such as Africa, which lack gauge data for validation. In this study, we show how calibrated ensembles of equally likely rainfall can be used to infer uncertainty in remotely sensed rainfall estimates, and subsequently in assessment of drought. We illustrate the methodology through a case study of weather index insurance (WII) in Zambia. Unlike traditional insurance, which compensates proven agricultural losses, WII pays out in the event that a weather index is breached. As remotely sensed rainfall is used to extend WII schemes to large numbers of farmers, it is crucial to ensure that the indices being insured are skillful representations of local environmental conditions. In our study we drive a land surface model with rainfall ensembles, in order to demonstrate how aggregation of rainfall estimates in space and time results in a clearer link with soil moisture, and hence a truer representation of agricultural drought. Although our study focuses on agricultural insurance, the methodological principles for application design are widely applicable in Africa and elsewhere.
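The ensemble approach in this abstract — driving a land-surface model with equally likely rainfall realisations to infer uncertainty in a drought index — can be sketched minimally. The bucket soil-moisture model, the multiplicative noise used to generate members, and every parameter value below are hypothetical simplifications of the calibrated ensembles and land-surface model the study uses:

```python
import random

random.seed(0)

def soil_moisture(rain_series, capacity=100.0, loss_rate=0.05):
    """Minimal bucket model standing in for a land-surface model:
    soil moisture gains daily rainfall and loses a fixed fraction per day."""
    sm = capacity / 2
    for rain in rain_series:
        sm = min(capacity, sm * (1 - loss_rate) + rain)
    return sm

def perturb(best_estimate, n_members=50, error_sd=0.5):
    """Toy 'calibrated ensemble': equally likely rainfall series produced by
    perturbing the satellite best estimate with multiplicative noise."""
    members = []
    for _ in range(n_members):
        factor = max(0.0, random.gauss(1.0, error_sd))
        members.append([r * factor for r in best_estimate])
    return members

best_estimate = [2.0] * 60  # hypothetical 60-day satellite estimate (mm/day)
ensemble = [soil_moisture(member) for member in perturb(best_estimate)]
spread = max(ensemble) - min(ensemble)
print(f"End-of-season soil moisture spread across ensemble: {spread:.1f} mm")
```

The spread of the resulting soil-moisture values is the quantity of interest: if it is wide, the remotely sensed index is too uncertain at that scale to insure responsibly, which is the "avoid doing harm" criterion the abstract raises.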