906 results for Timed and Probabilistic Automata


Relevance:

30.00%

Publisher:

Abstract:

Severe wind storms are one of the major natural hazards in the extratropics and inflict substantial economic damages and even casualties. Insured storm-related losses depend on (i) the frequency, nature and dynamics of storms, (ii) the vulnerability of the values at risk, (iii) the geographical distribution of these values, and (iv) the particular conditions of the risk transfer. It is thus of great importance to assess the impact of climate change on future storm losses. To this end, the current study employs, to our knowledge for the first time, a coupled approach, using output from high-resolution regional climate model scenarios for the European sector to drive an operational insurance loss model. An ensemble of coupled climate-damage scenarios is used to provide an estimate of the inherent uncertainties. Output of two state-of-the-art global climate models (HadAM3, ECHAM5) is used for present (1961–1990) and future climates (2071–2100, SRES A2 scenario). These serve as boundary data for two nested regional climate models with sophisticated gust parametrizations (CLM, CHRM). For validation and calibration purposes, an additional simulation is undertaken with the CHRM driven by the ERA40 reanalysis. The operational insurance model (Swiss Re) uses a European-wide damage function, an average vulnerability curve for all risk types, and contains the actual value distribution of a complete European market portfolio. The coupling between climate and damage models is based on daily maxima of 10 m gust winds, and the strategy adopted consists of three main steps: (i) development and application of a pragmatic selection criterion to retrieve significant storm events, (ii) generation of a probabilistic event set using a Monte-Carlo approach in the hazard module of the insurance model, and (iii) calibration of the simulated annual expected losses with a historic loss data base. The climate models considered agree regarding an increase in the intensity of extreme storms in a band across central Europe (stretching from southern UK and northern France to Denmark, northern Germany into eastern Europe). This effect increases with event strength, and rare storms show the largest climate change sensitivity, but are also beset with the largest uncertainties. Wind gusts decrease over northern Scandinavia and Southern Europe. The highest intra-ensemble variability is simulated for Ireland, the UK, the Mediterranean, and parts of Eastern Europe. The resulting changes in European-wide losses over the 110-year period are positive for all layers and all model runs considered and amount to 44% (annual expected loss), 23% (10-year loss), 50% (30-year loss), and 104% (100-year loss). There is a disproportionate increase in losses for rare high-impact events. The changes result from increases in both severity and frequency of wind gusts. Considerable geographical variability of the expected losses exists, with Denmark and Germany experiencing the largest loss increases (116% and 114%, respectively). All countries considered except for Ireland (−22%) experience some loss increases. Some ramifications of these results for the socio-economic sector are discussed, and future avenues for research are highlighted. The technique introduced in this study and its application to realistic market portfolios offer exciting prospects for future research on the impact of climate change that is relevant for policy makers, scientists and economists.
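
As a rough illustration of steps (ii) and (iii), the sketch below (Python; the Poisson frequency, Pareto severity and all numbers are invented, not Swiss Re's hazard module) generates a probabilistic storm event set by Monte Carlo and reads off the annual expected loss and the return-period losses referred to above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical hazard module: Poisson event counts, heavy-tailed event losses.
YEARS = 10_000            # length of the simulated event set (assumed)
FREQ = 4.5                # mean number of damaging storms per year (assumed)
SHAPE, SCALE = 1.8, 20.0  # assumed Pareto severity parameters (losses in M EUR)

annual_losses = np.empty(YEARS)
for year in range(YEARS):
    n_events = rng.poisson(FREQ)
    if n_events == 0:
        annual_losses[year] = 0.0
        continue
    event_losses = SCALE * (rng.pareto(SHAPE, n_events) + 1.0)
    annual_losses[year] = event_losses.sum()

# Annual expected loss and return-period losses (cf. the 10/30/100-year layers above).
print(f"annual expected loss: {annual_losses.mean():7.1f} M EUR")
for rp in (10, 30, 100):
    print(f"{rp:>3}-year loss:        {np.quantile(annual_losses, 1 - 1 / rp):7.1f} M EUR")
```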

Relevance:

30.00%

Publisher:

Abstract:

A new dynamic model of water quality, Q2, has recently been developed, capable of simulating large branched river systems. This paper describes the application of a generalized sensitivity analysis (GSA) to Q2 for single reaches of the River Thames in southern England. Focusing on the simulation of dissolved oxygen (DO), since this may be regarded as a proxy for the overall health of a river, the GSA is used to identify key parameters controlling model behavior and to provide a probabilistic procedure for model calibration. It is shown that, in the River Thames at least, it is more important to obtain high-quality forcing functions than to obtain improved parameter estimates once approximate values have been estimated. Furthermore, there is a need to ensure reasonable simulation of a range of water quality determinands, since a focus only on DO increases predictive uncertainty in the DO simulations. The Q2 model has been applied here to the River Thames, but it has a broad utility for evaluating other systems in Europe and around the world.
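
For readers unfamiliar with a generalized sensitivity analysis, the sketch below shows the basic mechanics under strong simplifying assumptions: a toy stand-in for a DO simulation (not the Q2 model), assumed parameter ranges and an assumed behavioural threshold. Parameter sets are sampled, split into behavioural and non-behavioural, and the Kolmogorov-Smirnov distance between the two marginal distributions indicates which parameters control model behaviour.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def simulate_min_do(reaeration, bod_decay, sod):
    """Toy stand-in for a river DO simulation (not the Q2 model)."""
    return 9.0 + 2.0 * reaeration - 3.0 * bod_decay - 1.5 * sod + rng.normal(0, 0.2)

# Monte Carlo sampling of parameter sets from assumed prior ranges.
n = 5000
params = {
    "reaeration": rng.uniform(0.1, 2.0, n),
    "bod_decay": rng.uniform(0.05, 1.0, n),
    "sod": rng.uniform(0.0, 1.0, n),
}
min_do = np.array([simulate_min_do(params["reaeration"][i],
                                   params["bod_decay"][i],
                                   params["sod"][i]) for i in range(n)])

behavioural = min_do > 8.0  # simulated minimum DO above an assumed target

# GSA: a large KS distance between behavioural and non-behavioural marginals
# flags a parameter as important for model behaviour.
for name, values in params.items():
    d, p = ks_2samp(values[behavioural], values[~behavioural])
    print(f"{name:<10} KS distance = {d:.2f} (p = {p:.1e})")
```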

Relevance:

30.00%

Publisher:

Abstract:

Internationally agreed standard protocols for assessing chemical toxicity of contaminants in soil to worms assume that the test soil does not need to equilibrate with the chemical to be tested prior to the addition of the test organisms and that the chemical will exert any toxic effect upon the test organism within 28 days. Three experiments were carried out to investigate these assumptions. The first experiment was a standard toxicity test where lead nitrate was added to a soil in solution to give a range of concentrations. The mortality of the worms and the concentration of lead in the survivors were determined. The LC50s for 14 and 28 days were 5311 and 5395 μg Pb g⁻¹ soil, respectively. The second experiment was a timed lead accumulation study with worms cultivated in soil containing either 3000 or 5000 μg Pb g⁻¹ soil. The concentration of lead in the worms was determined at various sampling times. Uptake at both concentrations was linear with time. Worms in the 5000 μg g⁻¹ soil accumulated lead at a faster rate (3.16 μg Pb g⁻¹ tissue day⁻¹) than those in the 3000 μg g⁻¹ soil (2.21 μg Pb g⁻¹ tissue day⁻¹). The third experiment was a timed experiment with worms cultivated in soil containing 7000 μg Pb g⁻¹ soil. Soil and lead nitrate solution were mixed and stored at 20 °C. Worms were added at various times over a 35-day period. The time to death increased from 23 h, when worms were added directly after the lead was added to the soil, to 67 h when worms were added after the soil had equilibrated with the lead for 35 days. In artificially Pb-amended soils the worms accumulate Pb over the duration of their exposure to the Pb. Thus time-limited toxicity tests may be terminated before worm body load has reached a toxic level. This could result in under-estimates of the toxicity of Pb to worms. As the equilibration time of artificially amended Pb-bearing soils increases, the bioavailability of Pb decreases. Thus addition of worms shortly after addition of Pb to soils may result in an over-estimate of Pb toxicity to worms. The current OECD acute worm toxicity test fails to take these two phenomena into account, thereby reducing the environmental relevance of the contaminant toxicities it is used to calculate. (C) 2002 Elsevier Science Ltd. All rights reserved.
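
For context, an LC50 such as the 14- and 28-day values quoted above is typically obtained by fitting a dose-response curve to mortality data; the sketch below does this with invented data and a log-logistic curve, not the data or fitting procedure of the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mortality data: soil Pb concentration (ug/g) vs fraction of worms dead.
conc = np.array([1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000], dtype=float)
dead_frac = np.array([0.0, 0.05, 0.10, 0.30, 0.45, 0.70, 0.90, 1.0])

def log_logistic(c, lc50, slope):
    """Two-parameter log-logistic dose-response curve; equals 0.5 at c = lc50."""
    return 1.0 / (1.0 + (lc50 / c) ** slope)

(lc50, slope), _ = curve_fit(log_logistic, conc, dead_frac, p0=[5000.0, 5.0])
print(f"estimated LC50 ~ {lc50:.0f} ug Pb per g soil (slope {slope:.1f})")
```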

Relevance:

30.00%

Publisher:

Abstract:

A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model is briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in the modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature. Specific organisations with exposure assessment responsibilities tend to use a limited range of models. The modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. In fact, different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make understanding the exposure assessment process more complex, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and where regulatory remits allow, to achieve a coherent and consistent exposure modelling process. We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, collection of better input data, probabilistic modelling, validation of model input and output, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process. (C) 2006 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

While the standard models of concentration addition and independent action predict the overall toxicity of multicomponent mixtures reasonably well, interactions may limit the predictive capability when a few compounds dominate a mixture. This study was conducted to test whether statistically significant systematic deviations from concentration addition (i.e. synergism/antagonism, dose ratio- or dose level-dependency) occur when two taxonomically unrelated species, the earthworm Eisenia fetida and the nematode Caenorhabditis elegans, were exposed to a full range of mixtures of the similarly acting neonicotinoid pesticides imidacloprid and thiacloprid. The effect of the mixtures on C. elegans was described significantly better (p<0.01) by a dose level-dependent deviation from the concentration addition model than by the reference model alone, while the reference model description of the effects on E. fetida could not be significantly improved. These results highlight that deviations from concentration addition are possible even with similarly acting compounds, but that the nature of such deviations is species dependent. For improving ecological risk assessment of simple mixtures, this implies that the concentration addition model may need to be used in a probabilistic context, rather than in its traditional deterministic manner. Crown Copyright (C) 2008 Published by Elsevier Inc. All rights reserved.
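
Both reference models named above have simple closed forms. The sketch below (with invented EC50s, Hill slopes and mixture concentrations, not values from the study) shows how a predicted mixture effect is computed under independent action and under concentration addition, the latter by solving for the effect level at which the toxic units sum to one.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical single-compound log-logistic dose-response parameters.
EC50 = {"imidacloprid": 2.0, "thiacloprid": 5.0}   # assumed, mg/kg soil
SLOPE = {"imidacloprid": 1.5, "thiacloprid": 2.0}  # assumed Hill slopes

def effect(compound, c):
    """Fractional effect of a single compound at concentration c."""
    return 1.0 / (1.0 + (EC50[compound] / c) ** SLOPE[compound])

def ec(compound, x):
    """Concentration of a single compound producing fractional effect x."""
    return EC50[compound] * (x / (1.0 - x)) ** (1.0 / SLOPE[compound])

mixture = {"imidacloprid": 1.0, "thiacloprid": 2.5}  # assumed mg/kg of each compound

# Independent action: combined effect of independently acting compounds.
e_ia = 1.0 - np.prod([1.0 - effect(k, c) for k, c in mixture.items()])

# Concentration addition: find the effect level x at which the toxic units sum to 1.
def toxic_units_minus_one(x):
    return sum(c / ec(k, x) for k, c in mixture.items()) - 1.0

e_ca = brentq(toxic_units_minus_one, 1e-6, 1.0 - 1e-6)

print(f"independent action:     {e_ia:.2f}")
print(f"concentration addition: {e_ca:.2f}")
```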

Relevance:

30.00%

Publisher:

Abstract:

Negative correlations between task performance in dynamic control tasks and verbalizable knowledge, as assessed by a post-task questionnaire, have been interpreted as dissociations that indicate two antagonistic modes of learning, one being “explicit”, the other “implicit”. This paper views the control tasks as finite-state automata and offers an alternative interpretation of these negative correlations. It is argued that “good controllers” observe fewer different state transitions and, consequently, can answer fewer post-task questions about system transitions than can “bad controllers”. Two experiments demonstrate the validity of the argument by showing the predicted negative relationship between control performance and the number of explored state transitions, and the predicted positive relationship between the number of explored state transitions and questionnaire scores. However, the experiments also elucidate important boundary conditions for the critical effects. We discuss the implications of these findings, and of other problems arising from the process control paradigm, for conclusions about implicit versus explicit learning processes.
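
To make the finite-state-automaton reading concrete, the toy sketch below (an arbitrary ten-state automaton, not the control task used in the experiments) counts the distinct state transitions a controller observes: a controller that steers straight to the target simply sees fewer transitions it could later be questioned about.

```python
import random

# Toy deterministic finite-state automaton: states 0..9, inputs "low"/"high".
def step(state, action):
    return (state + (1 if action == "high" else -1)) % 10

def run(policy, start=0, target=5, max_steps=50, seed=1):
    """Return the set of (state, action, next_state) transitions a controller observes."""
    random.seed(seed)
    state, seen = start, set()
    for _ in range(max_steps):
        action = policy(state, target)
        nxt = step(state, action)
        seen.add((state, action, nxt))
        state = nxt
    return seen

good = lambda s, t: "high" if s < t else "low"      # steers straight to the target
bad = lambda s, t: random.choice(["low", "high"])   # explores haphazardly

print("good controller saw", len(run(good)), "distinct transitions")
print("bad controller saw", len(run(bad)), "distinct transitions")
```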

Relevance:

30.00%

Publisher:

Abstract:

In 2002 India experienced a severe drought, one among the five worst droughts since records began in 1871, notable for its countrywide influence. The drought was primarily due to an unprecedented break in the monsoon during July, which persisted for almost the whole month and affected most of the sub-continent. The failure of the monsoon in 2002 was not predicted and India was not prepared for the devastating impacts on, for example, agriculture. This paper documents the evolution of the 2002 Indian summer monsoon and considers the possible factors that contributed to the drought and the failure of the forecasts. The development of the 2002/2003 El Niño and the unusually high levels of Madden-Julian Oscillation (MJO) activity during the monsoon season are identified as the central players. The 2002/2003 El Niño was characterised by very high sea-surface temperatures (SSTs) in the central Pacific that developed rapidly during the monsoon season. It is suggested that the unusual character of the developing El Niño was associated with the MJO and was a consequence of the eastward extension of the West Pacific Warm Pool, brought about primarily by a series of westerly wind events (WWEs) as part of the eastward movement of the active phase of the MJO. During the boreal summer, the MJO is usually characterised by northward movement, but in 2002 the northward component of the MJO was weak and the MJO was dominated by a strong eastward component, probably driven by the abnormally high SSTs in the central Pacific. It is suggested that a positive feedback existed between the developing El Niño and the eastward component of the MJO, which weakened the active phases of the monsoon. In particular, the unprecedented monsoon break in July could be associated with the juxtaposition of strong MJO activity with a developing El Niño, both of which interfered constructively with each other to produce major perturbations to the distribution of tropical heating. Subsequently, the main impact of the developing El Niño was a modulation of the Walker circulation that led to the overall suppression of the Indian monsoon during the latter part of the season. It is argued that the unique combination of a rapidly developing El Niño and strong MJO activity, which was timed within the seasonal cycle to have maximum impact on the Indian summer monsoon, meant that prediction of the prolonged break in July and the seasonally deficient rainfall was a challenge for both the empirical and dynamical forecasting systems. Copyright (C) 2006 Royal Meteorological Society.

Relevance:

30.00%

Publisher:

Abstract:

Improvements in the resolution of satellite imagery have enabled extraction of water surface elevations at the margins of a flood. Comparison between modelled and observed water surface elevations provides a new means for calibrating and validating flood inundation models; however, the uncertainty in these observed data has yet to be addressed. Here a flood inundation model is calibrated using a probabilistic treatment of the observed data. A LiDAR-guided snake algorithm is used to determine an outline of a flood event in 2006 on the River Dee, North Wales, UK, using a 12.5 m ERS-1 image. Points at approximately 100 m intervals along this outline are selected, and the water surface elevation recorded as the LiDAR DEM elevation at each point. With a planar water surface from the gauged upstream to downstream water elevations as an approximation, the water surface elevations at points along this flooded extent are compared to their ‘expected’ value. The pattern of errors between the two shows a roughly normal distribution; however, when plotted against coordinates there is obvious spatial autocorrelation. The source of this spatial dependency is investigated by comparing errors to the slope gradient and aspect of the LiDAR DEM. A LISFLOOD-FP model of the flood event is set up to investigate the effect of observed data uncertainty on the calibration of flood inundation models. Multiple simulations are run using different combinations of friction parameters, from which the optimum parameter set is selected. For each simulation a T-test is used to quantify the fit between modelled and observed water surface elevations. The points chosen for use in this T-test are selected based on their error. The selection criteria enable evaluation of the sensitivity of the choice of optimum parameter set to uncertainty in the observed data. This work explores the observed data in detail and highlights possible causes of error. The identification of significant error (RMSE = 0.8 m) between approximate expected and actual observed elevations from the remotely sensed data emphasises the limitations of using these data in a deterministic manner within the calibration process. These limitations are addressed by developing a new probabilistic approach to using the observed data.
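
As a rough sketch of the model-observation comparison step (invented elevations and friction values, not the River Dee data or LISFLOOD-FP output), each simulation's fit to the shoreline-derived water surface elevations can be summarised by an RMSE and a paired t-test:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)

# Hypothetical observed water surface elevations at shoreline points (m), with error.
profile = 10.0 + np.linspace(0.0, -1.5, 40)
observed = profile + rng.normal(0.0, 0.8, 40)

# Hypothetical simulations for three (channel, floodplain) Manning's n combinations,
# represented here simply as constant offsets from the water surface profile.
simulations = {
    (0.03, 0.06): profile + 0.10,
    (0.04, 0.08): profile + 0.35,
    (0.05, 0.10): profile + 0.60,
}

for n_values, modelled in simulations.items():
    rmse = np.sqrt(np.mean((modelled - observed) ** 2))
    t_stat, p_val = ttest_rel(modelled, observed)
    print(f"n={n_values}: RMSE={rmse:.2f} m, t={t_stat:.2f}, p={p_val:.3f}")
```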

Relevance:

30.00%

Publisher:

Abstract:

The ECMWF ensemble weather forecasts are generated by perturbing the initial conditions of the forecast using a subset of the singular vectors of the linearised propagator. Previous results show that, when creating probabilistic forecasts from this ensemble, better forecasts are obtained if the mean of the spread and the variability of the spread are calibrated separately. We show results from a simple linear model that suggest that this may be a generic property of all singular-vector-based ensemble forecasting systems that use only a subset of the full set of singular vectors.
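
A minimal sketch of what calibrating the mean and the variability of the spread separately could look like (toy spread values and made-up calibration constants; this is not the ECMWF calibration procedure):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ensemble spreads for 500 forecast cases (assumed: too small on average, too variable).
raw_spread = rng.gamma(shape=4.0, scale=0.4, size=500)

# Separate calibration of the two properties of the spread:
# (1) a target mean spread, e.g. matched to past ensemble-mean error statistics,
# (2) a damping coefficient for the case-to-case variability about that mean.
MEAN_TARGET = 2.1        # assumed value, would be fitted to past forecasts
VARIABILITY_COEFF = 0.6  # assumed value, would be fitted to past forecasts

calibrated_spread = MEAN_TARGET + VARIABILITY_COEFF * (raw_spread - raw_spread.mean())

print(f"raw spread:        mean = {raw_spread.mean():.2f}, std = {raw_spread.std():.2f}")
print(f"calibrated spread: mean = {calibrated_spread.mean():.2f}, std = {calibrated_spread.std():.2f}")
```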

Relevance:

30.00%

Publisher:

Abstract:

Process-based integrated modelling of weather and crop yield over large areas is becoming an important research topic. The production of the DEMETER ensemble hindcasts of weather allows this work to be carried out in a probabilistic framework. In this study, ensembles of crop yield (groundnut, Arachis hypogaea L.) were produced for ten 2.5° × 2.5° grid cells in western India using the DEMETER ensembles and the general large-area model (GLAM) for annual crops. Four key issues are addressed by this study. First, crop model calibration methods for use with weather ensemble data are assessed. Calibration using yield ensembles was more successful than calibration using reanalysis data (the European Centre for Medium-Range Weather Forecasts 40-yr reanalysis, ERA40). Secondly, the potential for probabilistic forecasting of crop failure is examined. The hindcasts show skill in the prediction of crop failure, with more severe failures being more predictable. Thirdly, the use of yield ensemble means to predict interannual variability in crop yield is examined and their skill assessed relative to baseline simulations using ERA40. The accuracy of multi-model yield ensemble means is equal to or greater than the accuracy using ERA40. Fourthly, the impact of two key uncertainties, sowing window and spatial scale, is briefly examined. The impact of uncertainty in the sowing window is greater with ERA40 than with the multi-model yield ensemble mean. Subgrid heterogeneity affects model accuracy: where correlations are low on the grid scale, they may be significantly positive on the subgrid scale. The implications of the results of this study for yield forecasting on seasonal time-scales are as follows. (i) There is the potential for probabilistic forecasting of crop failure (defined by a threshold yield value); forecasting of yield terciles shows less potential. (ii) Any improvement in the skill of climate models has the potential to translate into improved deterministic yield prediction. (iii) Whilst model input uncertainties are important, uncertainty in the sowing window may not require specific modelling. The implications of the results of this study for yield forecasting on multidecadal (climate change) time-scales are as follows. (i) The skill in the ensemble mean suggests that the perturbation, within uncertainty bounds, of crop and climate parameters, could potentially average out some of the errors associated with mean yield prediction. (ii) For a given technology trend, decadal fluctuations in the yield-gap parameter used by GLAM may be relatively small, implying some predictability on those time-scales.
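
The simplest of the quantities discussed above, the probability of crop failure derived from a yield ensemble, can be illustrated with a short sketch using synthetic yields (GLAM and the DEMETER hindcasts are not involved; the threshold and ensemble size are assumptions):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic ensemble of simulated groundnut yields (kg/ha) for one grid cell and season.
yield_ensemble = rng.normal(loc=900.0, scale=250.0, size=63)  # assumed ensemble size

# Crop failure defined by a threshold yield (assumed value).
failure_threshold = 600.0
p_failure = np.mean(yield_ensemble < failure_threshold)

# Deterministic counterpart: the ensemble-mean yield.
print(f"ensemble-mean yield:    {yield_ensemble.mean():.0f} kg/ha")
print(f"probability of failure: {p_failure:.2f} (yield < {failure_threshold:.0f} kg/ha)")
```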

Relevance:

30.00%

Publisher:

Abstract:

The steadily accumulating literature on technical efficiency in fisheries attests to the importance of efficiency as an indicator of fleet condition and as an object of management concern. In this paper, we extend previous work by presenting a Bayesian hierarchical approach that yields both efficiency estimates and, as a byproduct of the estimation algorithm, probabilistic rankings of the relative technical efficiencies of fishing boats. The estimation algorithm is based on recent advances in Markov Chain Monte Carlo (MCMC) methods—Gibbs sampling, in particular—which have not been widely used in fisheries economics. We apply the method to a sample of 10,865 boat trips in the US Pacific hake (or whiting) fishery during 1987–2003. We uncover systematic differences between efficiency rankings based on sample mean efficiency estimates and those that exploit the full posterior distributions of boat efficiencies to estimate the probability that a given boat has the highest true mean efficiency.
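
The probabilistic ranking is straightforward once posterior draws are available: for each MCMC draw, record which boat has the highest efficiency, and the relative frequencies estimate the probability that each boat is truly the most efficient. The sketch below uses fabricated posterior draws rather than output from the paper's Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(5)

# Fabricated posterior draws of mean technical efficiency for five boats (rows = MCMC draws).
n_draws, boats = 4000, ["A", "B", "C", "D", "E"]
assumed_means = np.array([0.82, 0.80, 0.78, 0.70, 0.65])  # assumed, for generating fake draws
draws = rng.beta(assumed_means * 60, (1 - assumed_means) * 60, size=(n_draws, len(boats)))

# Ranking by posterior mean vs. probability of being the most efficient boat.
post_mean = draws.mean(axis=0)
p_best = np.bincount(draws.argmax(axis=1), minlength=len(boats)) / n_draws

for b, m, p in zip(boats, post_mean, p_best):
    print(f"boat {b}: posterior mean efficiency {m:.3f}, P(most efficient) = {p:.2f}")
```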

Relevance:

30.00%

Publisher:

Abstract:

This article presents a statistical method for detecting recombination in DNA sequence alignments, which is based on combining two probabilistic graphical models: (1) a taxon graph (phylogenetic tree) representing the relationship between the taxa, and (2) a site graph (hidden Markov model) representing interactions between different sites in the DNA sequence alignments. We adopt a Bayesian approach and sample the parameters of the model from the posterior distribution with Markov chain Monte Carlo, using a Metropolis-Hastings and Gibbs-within-Gibbs scheme. The proposed method is tested on various synthetic and real-world DNA sequence alignments, and we compare its performance with the established detection methods RECPARS, PLATO, and TOPAL, as well as with two alternative parameter estimation schemes.
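
The combination of the two graphical models can be pictured as a hidden Markov model whose hidden states are candidate tree topologies and whose emissions are per-site likelihoods of the alignment columns; posterior state probabilities then flag putative recombinant regions. The toy sketch below (two topologies, fabricated per-site likelihoods, fixed parameters rather than the paper's Bayesian sampling) runs the forward-backward algorithm to locate topology changes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fabricated per-site likelihoods of each alignment column under two candidate topologies.
n_sites = 120
site_lik = np.empty((n_sites, 2))
site_lik[:, 0] = rng.uniform(0.55, 0.70, n_sites)  # topology 0 fits most sites better
site_lik[40:70, 0] = rng.uniform(0.30, 0.45, 30)   # except a simulated recombinant segment
site_lik[:, 1] = 1.0 - site_lik[:, 0]

trans = np.array([[0.98, 0.02],   # topology changes (recombination breakpoints) are rare
                  [0.02, 0.98]])
init = np.array([0.5, 0.5])

# Forward-backward recursions (normalised at each site) for posterior topology probabilities.
fwd = np.zeros((n_sites, 2))
fwd[0] = init * site_lik[0]
fwd[0] /= fwd[0].sum()
for t in range(1, n_sites):
    fwd[t] = site_lik[t] * (fwd[t - 1] @ trans)
    fwd[t] /= fwd[t].sum()

bwd = np.ones((n_sites, 2))
for t in range(n_sites - 2, -1, -1):
    bwd[t] = trans @ (site_lik[t + 1] * bwd[t + 1])
    bwd[t] /= bwd[t].sum()

posterior = fwd * bwd
posterior /= posterior.sum(axis=1, keepdims=True)
breakpoints = np.flatnonzero(np.diff(posterior.argmax(axis=1)))
print("inferred topology changes after sites:", breakpoints)
```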

Relevance:

30.00%

Publisher:

Abstract:

The utility of repeated salivary cortisol sampling as a substitute for 24-hour urinary-free cortisol (UFC) assessment was examined. Forty-four participants completed both 24-hour collections and 6 salivary collections at wake-up, 08:00, 12:00, 16:00, 20:00 and bedtime, during the same 24-hour period. The results demonstrated that mean, maximum, and amplitude (maximum minus minimum) for salivary cortisol all correlated positively with urinary cortisol, but the associations of these variables with urinary-free cortisol excretion were relatively small. Furthermore, a single salivary sample taken at wake-up was as good an indicator of overall cortisol production as the measures derived from multiple salivary samples. An examination of subject compliance indicated that many subjects failed to collect the timed salivary samples as instructed. The authors conclude that diurnal salivary cortisol sampling and 24-hour urinary cortisol collections are likely to provide different information about ambient hypothalamic-pituitary-adrenal productivity, and therefore these measures should not be used interchangeably. In addition, subject compliance is a serious consideration in designing studies that employ home salivary collections. Published by Elsevier Science Inc.