932 results for Accident insurance.
Abstract:
This article examines the ways in which insurance companies modified their investment policies during the interwar years, arguing that this period marked the start of the transition from ‘traditional’ to ‘modern’ investment practice. Economic and financial conditions raised considerable doubts regarding the suitability of traditional insurance investments, while competitive conditions forced insurance offices to seek higher-yielding assets. These pressures led to a considerable increase in the proportion of new investment devoted to corporate securities, including ordinary shares. Meanwhile new insurance investment philosophies began to be advocated, which accorded both legitimacy and importance to the role of ordinary shares in insurance portfolios.
Abstract:
Accident and Emergency (A&E) units provide a route for patients requiring urgent admission to acute hospitals. Public concern over long waiting times for admissions motivated this study, whose aim is to explore the factors which contribute to such delays. The paper discusses the formulation and calibration of a system dynamics model of the interaction of demand pattern, A&E resource deployment, other hospital processes and bed numbers; and the outputs of policy analysis runs of the model which vary a number of the key parameters. Two significant findings have policy implications. One is that while some delays to patients are unavoidable, reductions can be achieved by selective augmentation of resources within, and relating to, the A&E unit. The second is that reductions in bed numbers do not increase waiting times for emergency admissions, their effect instead being to increase sharply the number of cancellations of admissions for elective surgery. This suggests that basing A&E policy solely on any single criterion will merely succeed in transferring the effects of a resource deficit to a different patient group.
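The stock-flow interaction described above can be sketched in a few lines. The following is a minimal, hypothetical discrete-time simulation, not the paper's calibrated model; the parameter values, the priority rule and the single hourly elective admission are all illustrative assumptions.

```python
# Minimal sketch of a discrete-time stock-flow (system dynamics) model of an
# A&E unit. All names and values are illustrative assumptions.
import random

def simulate(hours=24 * 28, arrivals_per_hour=8, ae_capacity=9,
             beds=400, discharge_prob=0.025, seed=1):
    rng = random.Random(seed)
    queue, occupied, cancelled = 0, int(beds * 0.9), 0
    queue_trace = []
    for _ in range(hours):
        # Emergency arrivals join the A&E queue (roughly Poisson).
        queue += sum(rng.random() < arrivals_per_hour / 60 for _ in range(60))
        # Each occupied bed empties with a small hourly probability.
        occupied -= sum(rng.random() < discharge_prob for _ in range(occupied))
        # A&E throughput is limited by both unit resources and free beds.
        admitted = min(queue, ae_capacity, beds - occupied)
        queue -= admitted
        occupied += admitted
        # One elective admission per hour, cancelled when no bed is free.
        if beds - occupied >= 1:
            occupied += 1
        else:
            cancelled += 1
        queue_trace.append(queue)
    return sum(queue_trace) / hours, cancelled

for beds in (400, 360):
    mean_queue, cancelled = simulate(beds=beds)
    print(f"beds={beds}: mean emergency queue={mean_queue:.1f}, "
          f"elective cancellations={cancelled}")
```

Because emergency admissions take priority for freed beds in this toy setup, cutting bed numbers mainly raises elective cancellations rather than emergency waits, echoing the paper's second finding.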
Abstract:
Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components: a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (the Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and vulnerability modules, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.
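The four-module chain can be made concrete with a heavily simplified sketch. The event distribution, the rainfall-to-depth mapping, the depth-damage curve and the policy terms below are all illustrative assumptions, not the Dublin model.

```python
# Hedged sketch of a four-module catastrophe-model chain:
# events -> hazard -> vulnerability -> financial loss.
import numpy as np

rng = np.random.default_rng(42)
n_events = 100_000

# 1. Stochastic event module: synthetic extreme-rainfall events (mm/day).
rainfall = rng.gumbel(loc=40.0, scale=15.0, size=n_events)

# 2. Hazard module: map rainfall above a drainage threshold to flood depth (m).
depth = np.clip((rainfall - 70.0) / 50.0, 0.0, None)

# 3. Vulnerability module: depth-damage curve gives the damaged fraction of value.
damage_fraction = 1.0 - np.exp(-1.5 * depth)

# 4. Financial module: apply exposure, deductible and policy limit.
exposure, deductible, limit = 250_000.0, 5_000.0, 200_000.0
loss = np.clip(damage_fraction * exposure - deductible, 0.0, limit)

print(f"mean loss per event: {loss.mean():.0f}")
print(f"99th-percentile event loss: {np.quantile(loss, 0.99):.0f}")
```

Swapping the distribution in module 1, as the paper does with its four driving rainfall data sets, shifts the entire loss distribution, which is why loss estimates are so sensitive to the precipitation input.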
Abstract:
COCO-2 is a model for assessing the potential economic costs likely to arise off-site following an accident at a nuclear reactor. COCO-2 builds on work presented in the model COCO-1 developed in 1991 by considering economic effects in more detail, and by including more sources of loss. Of particular note are: the consideration of the directly affected local economy, indirect losses that stem from the directly affected businesses, losses due to changes in tourism consumption, integration with the large body of work on recovery after an accident and a more systematic approach to health costs. The work, where possible, is based on official data sources for reasons of traceability, maintenance and ease of future development. This report describes the methodology and discusses the results of an example calculation. Guidance on how the base economic data can be updated in the future is also provided.
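As a rough illustration of the kind of aggregation such a model performs, the sketch below sums hypothetical direct, indirect, tourism, recovery and health components with simple discounting. All figures and the discounting rule are invented placeholders, not COCO-2 data or methodology.

```python
# Invented placeholder figures (arbitrary currency units), for illustration only.
direct_local = 120.0        # lost output in the directly affected local economy
indirect_multiplier = 0.6   # indirect losses in linked businesses
tourism_loss = 35.0         # change in tourism consumption
recovery_cost = 80.0        # recovery and remediation after the accident
health_cost = 50.0          # systematic health-cost component

def total_offsite_cost(years=10, discount_rate=0.035):
    """Discounted flow of annual economic losses plus one-off cost components."""
    annual = direct_local * (1 + indirect_multiplier) + tourism_loss
    flow = sum(annual / (1 + discount_rate) ** t for t in range(years))
    return flow + recovery_cost + health_cost

print(f"example off-site cost: {total_offsite_cost():.0f}")
```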
Abstract:
Data from four experimental research projects are presented which have in common that unexpected results caused a change in direction of the research. A plant growth accelerator caused the appearance of white black bean aphids, a synthetic pyrethroid suspected of enhancing aphid reproduction proved to enhance plant growth, a chance conversation with a colleague initiated a search for fungal DNA in aphids, and the accidental invasion of aphid cultures by a parasitoid reversed the aphid population ranking of two Brussels sprout cultivars. This last result led to a whole series of studies on the plant odour preferences of emerging parasitoids which in turn revealed the unexpected phenomenon that chemical cues to the maternal host plant are left with the eggs at oviposition. It is pointed out that, too often, researchers fail to follow up unexpected results because they resist accepting flaws in their hypotheses; also that current application criteria for research funding make it hard to accommodate unexpected findings.
Abstract:
Widespread commercial use of the internet has significantly increased the volume and scope of data being collected by organisations. ‘Big data’ has emerged as a term to encapsulate both the technical and commercial aspects of this growing data collection activity. To date, much of the discussion of big data has centred upon its transformational potential for innovation and efficiency, yet there has been less reflection on its wider implications beyond commercial value creation. This paper builds upon normal accident theory (NAT) to analyse the broader ethical implications of big data. It argues that the strategies behind big data require organisational systems that leave them vulnerable to normal accidents, that is to say some form of accident or disaster that is both unanticipated and inevitable. Whilst NAT has previously focused on the consequences of physical accidents, this paper suggests a new form of system accident that we label data accidents. These have distinct, less tangible and more complex characteristics and raise significant questions over the role of individual privacy in a ‘data society’. The paper concludes by considering the ways in which the risks of such data accidents might be managed or mitigated.
Abstract:
Lack of access to insurance exacerbates the impact of climate variability on smallholder farmers in Africa. Unlike traditional insurance, which compensates proven agricultural losses, weather index insurance (WII) pays out in the event that a weather index is breached. In principle, WII could be provided to farmers throughout Africa. There are two data-related hurdles to this. First, most farmers do not live close enough to a rain gauge with a sufficiently long record of observations. Second, mismatches between weather indices and yield may expose farmers to uncompensated losses, and insurers to unfair payouts – a phenomenon known as basis risk. In essence, basis risk results from complexities in the progression from meteorological drought (rainfall deficit) to agricultural drought (low soil moisture). In this study, we use a land-surface model to describe the transition from meteorological to agricultural drought. We demonstrate that spatial and temporal aggregation of rainfall results in a clearer link with soil moisture, and hence a reduction in basis risk. We then use an advanced statistical method to show how optimal aggregation of satellite-based rainfall estimates can reduce basis risk, enabling remotely sensed data to be utilized robustly for WII.
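The core of the argument, that aggregation tightens the link between a rainfall index and soil moisture, can be shown with a toy simulation. The rainfall model, the soil-moisture proxy and the payout terms below are illustrative assumptions, not the land-surface model or the statistical method used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
seasons, days, pixels = 500, 90, 25

# Toy rainfall: a shared seasonal drought signal plus heavy per-pixel noise.
signal = rng.gamma(shape=2.0, scale=1.5, size=(seasons, days, 1))
rain = signal * rng.lognormal(mean=0.0, sigma=0.8, size=(seasons, days, pixels))

# Toy agricultural-drought proxy standing in for modelled soil moisture.
soil_moisture = signal[..., 0].mean(axis=1)

def payout(index, trigger=250.0, rate=2.0):
    """WII payout: pay in proportion to the rainfall deficit below the trigger."""
    return np.maximum(0.0, trigger - index) * rate

pixel_index = rain[:, :, 0].sum(axis=1)           # one gauge/pixel, full season
aggregated_index = rain.mean(axis=2).sum(axis=1)  # spatially averaged, full season

for name, idx in [("single pixel", pixel_index), ("aggregated  ", aggregated_index)]:
    r = np.corrcoef(idx, soil_moisture)[0, 1]
    print(f"{name} index: corr with drought proxy = {r:.2f}, "
          f"mean payout = {payout(idx).mean():.0f}")
```

The spatially aggregated index correlates far more strongly with the drought proxy, which is the sense in which aggregation reduces basis risk.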
Abstract:
Remotely sensed rainfall is increasingly being used to manage climate-related risk in gauge sparse regions. Applications based on such data must make maximal use of the skill of the methodology in order to avoid doing harm by providing misleading information. This is especially challenging in regions, such as Africa, which lack gauge data for validation. In this study, we show how calibrated ensembles of equally likely rainfall can be used to infer uncertainty in remotely sensed rainfall estimates, and subsequently in assessment of drought. We illustrate the methodology through a case study of weather index insurance (WII) in Zambia. Unlike traditional insurance, which compensates proven agricultural losses, WII pays out in the event that a weather index is breached. As remotely sensed rainfall is used to extend WII schemes to large numbers of farmers, it is crucial to ensure that the indices being insured are skillful representations of local environmental conditions. In our study we drive a land surface model with rainfall ensembles, in order to demonstrate how aggregation of rainfall estimates in space and time results in a clearer link with soil moisture, and hence a truer representation of agricultural drought. Although our study focuses on agricultural insurance, the methodological principles for application design are widely applicable in Africa and elsewhere.
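A compressed sketch of the ensemble idea: perturb a best-estimate rainfall series into equally likely members, push each through the same index calculation, and read uncertainty off the spread. The ensemble size, the perturbation model and the trigger below are assumptions for illustration, not the calibrated ensembles of the study.

```python
import numpy as np

rng = np.random.default_rng(7)
members, days = 50, 90

# A single best-estimate daily rainfall series (mm/day).
best_estimate = rng.gamma(2.0, 2.0, size=days)

# Perturb it into 50 equally likely ensemble members.
ensemble = best_estimate * rng.lognormal(0.0, 0.5, size=(members, days))

# Push every member through the same index calculation.
seasonal_totals = ensemble.sum(axis=1)
lo, hi = np.percentile(seasonal_totals, [5, 95])

trigger = 300.0  # hypothetical insured drought trigger (mm/season)
breach_fraction = (seasonal_totals < trigger).mean()
print(f"seasonal rainfall 90% range: [{lo:.0f}, {hi:.0f}] mm")
print(f"fraction of members breaching the trigger: {breach_fraction:.2f}")
```

The spread across members is the uncertainty estimate; an index that breaches in most members is a far safer basis for insurance than one that breaches in only a few.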
Abstract:
The current system of controlling oil spills involves a complex relationship of international, federal and state law, which has not proven to be very effective. The multiple layers of regulation often leave shipowners unsure of the laws facing them. Furthermore, nations have had difficulty enforcing these legal requirements. This thesis deals with the role marine insurance can play within the existing system of legislation to provide a strong preventative influence that is simple and cost-effective to enforce. In principle, insurance has two ways of enforcing higher safety standards and limiting the risk of an accident occurring. The first is through the use of insurance premiums that are based on the level of care taken by the insured. This means that a person engaging in riskier behavior faces a higher insurance premium, because their actions increase the probability of an accident occurring. The second method, available to the insurer, is collectively known as cancellation provisions or underwriting clauses. These are clauses written into an insurance contract that invalidate the agreement when certain conditions are not met by the insured. The problem has been that obtaining information about the behavior of an insured party requires monitoring, which incurs a cost to the insurer. The application of these principles proves to be a more complicated matter. The modern marine insurance industry is a complicated system of multiple contracts, through different insurers, that covers the many facets of oil transportation. Its business practices have resulted in policy packages that cross the neat bounds of individual, specific insurance coverage. This paper shows that insurance can improve safety standards in three general areas: crew training; hull and equipment construction and maintenance; and routing schemes and exclusionary zones. With crew, hull and equipment, underwriting clauses can be used to ensure that minimum standards are met by the insured. Premiums can then be structured to reflect the additional care taken by the insured above and beyond these minimum standards. Routing schemes are traffic flow systems applied to congested waterways, such as the entrance to New York harbor. Using natural obstacles or manmade dividers, ships are separated into two lanes of opposing traffic, similar to a road. Exclusionary zones are marine areas designated off limits to tanker traffic, either because of a sensitive ecosystem or because local knowledge of the region is required to ensure safe navigation. Underwriting clauses can be used to nullify an insurance contract when a tanker is not in compliance with established exclusionary zones or routing schemes.
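The two enforcement channels reduce to a few lines of logic. The sketch below is a hypothetical toy with invented numbers and rules, not an actual marine policy.

```python
def premium(base, accident_prob, expected_loss, loading=0.15):
    """Behaviour-based pricing: riskier conduct raises the expected-loss term."""
    return base + (1 + loading) * accident_prob * expected_loss

def claim_payable(claim, crew_certified, in_exclusion_zone):
    """Underwriting clauses: cover is void when minimum standards are breached."""
    if not crew_certified or in_exclusion_zone:
        return 0.0
    return claim

# Riskier behaviour -> higher accident probability -> higher premium.
print(premium(10_000, accident_prob=0.01, expected_loss=5_000_000))  # careful owner
print(premium(10_000, accident_prob=0.03, expected_loss=5_000_000))  # careless owner

# A tanker operating inside an exclusionary zone forfeits its cover entirely.
print(claim_payable(2_000_000, crew_certified=True, in_exclusion_zone=True))
```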
Abstract:
The goal of this paper is to show the possibility of a non-monotone relation between coverage and risk, which has been considered in the literature on insurance models since the work of Rothschild and Stiglitz (1976). We present an insurance model where the insured agents are heterogeneous in risk aversion and in lenience (a prevention cost parameter). Risk aversion is described by a continuous parameter which is correlated with lenience and, for the sake of simplicity, we assume perfect correlation. In the case of positive correlation, the more risk averse agent has a higher cost of prevention, leading to a higher demand for coverage. Equivalently, the single crossing property (SCP) is valid and implies a positive correlation between coverage and risk in equilibrium. On the other hand, if the correlation between risk aversion and lenience is negative, not only may the SCP be broken, but so may the monotonicity of contracts, i.e., the prediction that high (low) risk averse types choose full (partial) insurance. In both cases riskiness is monotonic in risk aversion, but in the latter case there are some coverage levels associated with two different risks (low and high), which implies that the ex-ante (with respect to the risk aversion distribution) correlation between coverage and riskiness may have any sign (even though the ex-post correlation is always positive). Moreover, using another instrument (a proxy for riskiness), we give a testable implication to disentangle single crossing from non-single crossing under an ex-post zero correlation result: the monotonicity of coverage as a function of riskiness. Since, controlling for risk aversion (no asymmetric information), coverage is a monotone function of riskiness, this also gives a test for asymmetric information. Finally, we relate these theoretical results to empirical tests in the recent literature, especially the work of Dionne, Gouriéroux and Vanasse (2001). In particular, they found empirical evidence that seems to be compatible with asymmetric information and non-single crossing in our framework. More generally, we build a hidden information model showing how omitted variables (asymmetric information) can bias the sign of the correlation of equilibrium variables, conditioning on all observable variables. We show that this may be the case when the omitted variables have a non-monotonic relation with the observable ones, a feature that a one-dimensional model does not capture. Hence, our main result is to point out the importance of the SCP in testing predictions of hidden information models.
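A stylized numerical illustration, deliberately cruder than the paper's model, shows how the correlation between risk aversion and prevention cost can flip the observed sign of the coverage-risk correlation. The functional forms below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.uniform(0.0, 1.0, size=10_000)  # risk aversion (unobserved by the insurer)

for label, prevention_cost in [("positive corr.", a), ("negative corr.", 1.0 - a)]:
    risk = 0.1 + 0.3 * prevention_cost  # costlier prevention -> less effort -> more risk
    coverage = 0.5 * a + 0.5 * risk     # demand rises with aversion and with risk
    r = np.corrcoef(coverage, risk)[0, 1]
    print(f"{label}: corr(coverage, risk) = {r:+.2f}")
```

With negative correlation, the same demand rule produces a negative coverage-risk correlation: the omitted-variable bias the abstract describes.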
Abstract:
One of the central problems in contract law is to define the frontier between legal and illegal breaches of promises. The distinction between good and bad faith is perhaps the conceptual tool most commonly used to tell one from the other. Lawyers spend a lot of energy trying to frame better definitions of the concepts of good and bad faith based on principles of ethics or justice, but often pay much less attention to theories dealing with the incentives that can engender good faith behavior in contractual relationships. By describing the economics of what Stiglitz defined as “explicit” and “implicit” insurance, I highlight, with essentially no mathematical notation, the “insurance function” hidden in any promise. My aim is to render the subject intelligible and useful to lawyers with little familiarity with economics.
Abstract:
In this paper we propose applying the equilibrium notions from the recent literature on robust mechanism design with endogenous information acquisition to a problem of risk sharing between two agents. Through this example we are able to motivate the use of this equilibrium notion, as well as to discuss the effects of introducing an information-dependent participation constraint. The simplicity of the model allows us to characterize the possibility of implementing the Pareto efficient allocation in terms of the cost of information acquisition. Furthermore, we show that the precision of the information can have a negative effect on the implementation of the efficient allocation. Finally, two specific examples are given of situations in which this model applies.