995 results for Probabilistic Modelling


Relevance:

100.00%

Abstract:

The present paper addresses two major concerns identified when developing neural-network-based prediction models, both of which can limit their wider applicability in industry. The first is that neural network models do not appear to be readily available to the corrosion engineer. The first part of this paper therefore describes a neural network model of CO2 corrosion created using a standard commercial software package and simple modelling strategies. Such a model was able to capture practically all of the trends in the experimental data with acceptable accuracy. This exercise demonstrates that a corrosion engineer could readily develop a neural network model such as the one described here for any problem at hand, given sufficient experimental data, even when understanding of the underlying processes is poor. The second problem arises when not all of the required inputs to a model are known, or when they can be estimated only with limited accuracy. In such cases it is advantageous to have models that accept a range of values, rather than a single value, as input. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
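The range-as-input idea can be sketched in a few lines: sample each uncertain input from its interval, push the samples through the prediction model, and inspect the spread of the outputs. The corrosion-rate function below is a made-up surrogate standing in for the paper's trained network; the ranges and coefficients are illustrative only.

```python
import random
import statistics

def corrosion_rate(temp_c, ph, pco2_bar):
    # Hypothetical surrogate for the trained prediction model;
    # the functional form is illustrative only.
    return 0.1 * pco2_bar * max(0.0, 8.0 - ph) * (1.0 + 0.02 * temp_c)

def monte_carlo_sensitivity(input_ranges, n_samples=10_000, seed=42):
    # Draw each input uniformly from its range and collect the
    # distribution of predicted corrosion rates.
    rng = random.Random(seed)
    rates = []
    for _ in range(n_samples):
        sample = {name: rng.uniform(lo, hi)
                  for name, (lo, hi) in input_ranges.items()}
        rates.append(corrosion_rate(**sample))
    return statistics.mean(rates), statistics.stdev(rates)

mean_rate, spread = monte_carlo_sensitivity({
    "temp_c":   (20.0, 60.0),   # operating temperature, degC
    "ph":       (4.0, 6.0),     # solution pH
    "pco2_bar": (0.5, 2.0),     # CO2 partial pressure, bar
})
# A wide spread relative to the mean flags inputs worth pinning down.
```

A large output spread for a given set of input ranges is exactly the sensitivity signal the abstract describes.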

Relevance:

100.00%

Abstract:

Long-term exposure of skylarks to a fictitious insecticide and of wood mice to a fictitious fungicide was modelled probabilistically in a Monte Carlo simulation. Within the same simulation, the consequences of pesticide exposure for reproductive success were modelled using the toxicity-exposure-linking rules developed by R. S. Bennet et al. (2005) and the interspecies extrapolation factors suggested by R. Luttik et al. (2005). We built models reflecting a range of scenarios and were thereby able to show how exposure to a pesticide might alter the number of individuals engaged in any given phase of the breeding cycle at any given time, and to predict the number of new adults at the season's end.
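A toy version of such a simulation, with invented parameters and a crude exposure penalty in place of the published linking rules and extrapolation factors:

```python
import random

def breeding_season(n_pairs=100, broods=3, base_success=0.6,
                    exposure_penalty=0.0, seed=3):
    # Each pair attempts several broods; pesticide exposure lowers
    # the per-brood success probability (a crude stand-in for the
    # toxicity-exposure-linking rules cited in the abstract).
    rng = random.Random(seed)
    new_adults = 0
    for _ in range(n_pairs):
        for _ in range(broods):
            if rng.random() < base_success - exposure_penalty:
                new_adults += rng.randint(2, 4)  # surviving fledglings
    return new_adults

control = breeding_season()
exposed = breeding_season(exposure_penalty=0.2)
# Comparing the two runs shows the predicted effect of exposure on
# the number of new adults at the season's end.
```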

Relevance:

100.00%

Abstract:

Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to, and compared with, previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean-shift clustering framework. Experimental results demonstrate the dynamics of the new algorithm on a set of simple test problems.
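The core update can be illustrated for a one-dimensional Gaussian model density: estimate the gradient of KL(p || f/Z) with the score-function (likelihood-ratio) trick from samples of the model, then descend it. The objective below is a toy unnormalised density, not one of the paper's test problems, and only the mean is adapted.

```python
import math
import random

def objective(x):
    # Toy unnormalised "density" with its mode at x = 2.
    return math.exp(-(x - 2.0) ** 2)

def kl_gradient_step(mu, sigma, rng, lr=0.05, n=200):
    # Score-function estimate of d/dmu KL(p || f/Z) for fixed sigma:
    # E_p[ (d/dmu log p) * (log p - log f) ], using E_p[d/dmu log p] = 0.
    grad = 0.0
    for _ in range(n):
        x = rng.gauss(mu, sigma)
        z = (x - mu) / sigma
        log_p = -0.5 * z * z - math.log(sigma)  # up to a constant
        log_f = -(x - 2.0) ** 2
        grad += (z / sigma) * (log_p - log_f)
    return mu - lr * grad / n

rng = random.Random(0)
mu = 0.0
for _ in range(300):
    mu = kl_gradient_step(mu, 1.0, rng)
# mu drifts toward the objective's mode at x = 2.
```

Because the gradient is estimated from the model's own samples, this is a population-based update in the sense of the paper, not a gradient of the objective itself.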

Relevance:

100.00%

Abstract:

Event extraction from texts aims to detect structured information such as what happened, to whom, where, and when. Event extraction and visualization are typically treated as two separate tasks. In this paper we propose a novel approach, based on probabilistic modelling, that jointly extracts and visualizes events from tweets, so that each task benefits from the other. We model each event as a joint distribution over named entities, a date, a location and event-related keywords. Moreover, both tweets and event instances are associated with coordinates in the visualization space. The manifold assumption, namely that the intrinsic geometry of tweets is a low-rank, non-linear manifold within the high-dimensional space, is incorporated into the learning framework through a regularization term. Experimental results show that the proposed approach handles both event extraction and visualization effectively and performs markedly better than both a state-of-the-art event extraction method and a pipeline approach to event extraction and visualization.
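In such a model, each latent event is a joint distribution over named entities, a date, a location and keywords, and reading off the structured record amounts to taking the most probable element of each component. A minimal sketch with invented probabilities (not output from the paper's model):

```python
# One latent event, represented (as in the abstract) by per-component
# probability distributions. All values here are invented.
event = {
    "entity":   {"FIFA": 0.6, "UEFA": 0.3, "IOC": 0.1},
    "date":     {"2014-06-12": 0.7, "2014-06-13": 0.3},
    "location": {"Sao Paulo": 0.8, "Rio de Janeiro": 0.2},
    "keyword":  {"opening": 0.5, "match": 0.4, "ceremony": 0.1},
}

def extract(event):
    # The structured record (who, when, where, what) is the argmax
    # of each component distribution.
    return {part: max(dist, key=dist.get) for part, dist in event.items()}

record = extract(event)
```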

Relevance:

70.00%

Abstract:

This study focuses on the probabilistic modelling of the mechanical properties of prestressing strands, based on data collected from tensile tests carried out at Laboratório Nacional de Engenharia Civil (LNEC), Portugal, for certification purposes, covering a period of about nine years of production. The strands studied were produced by six manufacturers from four countries, namely Portugal, Spain, Italy and Thailand. The variability of the most important mechanical properties is examined and the results are compared with the recommendations of the Probabilistic Model Code, as well as the Eurocodes and earlier studies. The results show a very low variability, which benefits structural safety. Based on these results, probabilistic models for the most important mechanical properties of prestressing strands are proposed.
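The kind of model proposed can be illustrated by fitting a normal distribution to tensile-strength measurements and reading off the coefficient of variation and a characteristic (5% fractile) value. The numbers below are invented, not LNEC data.

```python
import statistics

# Invented tensile-strength measurements for one strand type (MPa).
strengths = [1870, 1885, 1860, 1892, 1878, 1866, 1881, 1874, 1890, 1869]

mean = statistics.mean(strengths)
sd = statistics.stdev(strengths)
cov = sd / mean                 # coefficient of variation
char_5pct = mean - 1.645 * sd   # 5% fractile under a normal model
# A small cov (here well under 2%) is the low variability the study
# reports as benefiting structural safety.
```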


Relevance:

70.00%

Abstract:

A wide variety of exposure models is currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised by exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model is briefly described, along with its inherent strengths and weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in the modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature: specific organisations with exposure assessment responsibilities tend to use a limited range of models, and the modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. Indeed, different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make the exposure assessment process harder to understand, can lead to inconsistency between organisations in how critical modelling issues (e.g. variability and uncertainty) are addressed, and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and where regulatory remits allow, to produce a coherent and consistent exposure modelling process.
We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, the collection of better input data, probabilistic modelling, validation of model inputs and outputs, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process. (C) 2006 Elsevier Ltd. All rights reserved.
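The aggregate/cumulative distinction described above reduces to two sums: over pathways for one chemical, and over chemicals (weighted by relative potency) for one receptor. All figures below are invented:

```python
# Aggregate exposure: one chemical, summed over pathways (mg/kg bw/day).
pathways = {
    "dietary": 0.002,
    "consumer_product": 0.0005,
    "environmental": 0.0002,
}
aggregate = sum(pathways.values())

# Cumulative exposure: several chemicals with a common mode of action,
# weighted by (invented) relative potency before summing.
relative_potency = {"chem_a": 1.0, "chem_b": 0.3}
exposures = {"chem_a": aggregate, "chem_b": 0.004}
cumulative = sum(relative_potency[c] * exposures[c] for c in exposures)
```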

Relevance:

60.00%

Abstract:

This paper presents a methodology that aims to increase the probability of delivering power to any load point of the electrical distribution system by identifying new investments in distribution components. The methodology is based on statistical failure and repair data of distribution power system components and uses fuzzy-probabilistic modelling of component outage parameters. Fuzzy membership functions of the outage parameters are obtained from statistical records. A mixed-integer non-linear optimization technique is developed to identify investments in distribution network components that increase the availability level for any customer in the distribution system at minimum cost to the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a real distribution network.

Relevance:

60.00%

Abstract:

This paper presents a methodology for reconfiguring distribution networks in the presence of outages, in order to choose the reconfiguration with the lowest power losses. The methodology is based on statistical failure and repair data of distribution power system components and uses fuzzy-probabilistic modelling of component outage parameters. Fuzzy membership functions of the outage parameters are obtained from statistical records. A hybrid method combining fuzzy sets and Monte Carlo simulation, based on the fuzzy-probabilistic models, captures both the randomness and the fuzziness of component outage parameters. Once the system states have been obtained by Monte Carlo simulation, a logic programming algorithm is applied to generate all possible reconfigurations for every system state. A distribution power flow is then applied to evaluate line flows and bus voltages, to identify any overloading and/or voltage violations, and to select the feasible reconfiguration with the lowest power losses. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study that considers a real distribution network.
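The hybrid fuzzy/Monte Carlo idea can be sketched with a triangular fuzzy number for an outage rate: draw a random alpha-cut, then draw the parameter uniformly inside the interval that cut induces, so both fuzziness and randomness enter each sampled system state. The parameter values are illustrative, not from the case study.

```python
import random

def sample_fuzzy_rate(a, b, c, rng):
    # (a, b, c) is a triangular fuzzy number for an outage rate
    # (failures/year). A random alpha-cut [lo, hi] is taken, then a
    # crisp value is drawn uniformly from it: one hybrid sample that
    # reflects both fuzziness and randomness.
    alpha = rng.random()
    lo = a + alpha * (b - a)
    hi = c - alpha * (c - b)
    return rng.uniform(lo, hi)

rng = random.Random(7)
rates = [sample_fuzzy_rate(0.01, 0.02, 0.05, rng) for _ in range(10_000)]
mean_rate = sum(rates) / len(rates)
# Each sampled rate would parameterise one Monte Carlo system state.
```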

Relevance:

60.00%

Abstract:

Old timber structures may show significant variation in cross-section geometry along the same element, as a result of both construction methods and deterioration. As a consequence, defining the geometric parameters in situ may be both time-consuming and costly. This work presents the results of inspections carried out on different timber structures. Based on the results obtained, different simplified geometric models are proposed to efficiently model the geometry variations found. Probabilistic modelling techniques are also used to define safety parameters of existing timber structures subjected to dead and live loads, namely self-weight and wind actions. The parameters of the models were defined as probabilistic variables, and the safety of a selected case study was assessed using Monte Carlo simulation. Assuming a target reliability index, a model was defined for both the residual cross-section and the time-dependent evolution of deterioration. It was thus possible to compute probabilities of failure and reliability indices, as well as time-evolution deterioration curves, for this structure. The results provide a proposal for defining the cross-section geometric parameters of existing timber structures with different levels of decay, using a simplified probabilistic geometry model and considering a remaining-capacity factor for the decayed areas. This model can be used for assessing the safety of the structure at present and for predicting future performance.
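A stripped-down version of such a Monte Carlo safety assessment: sample strength, a random deterioration rate acting on a remaining-capacity factor, and a load-induced stress, then count exceedances. All distributions and parameters below are assumed for illustration, not the paper's calibrated models.

```python
import math
import random

def failure_probability(years, n=50_000, seed=1):
    # Monte Carlo sketch: the residual capacity factor decays with a
    # random rate, and failure occurs when the load-induced stress
    # exceeds the residual strength. All parameters are invented.
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        strength = rng.gauss(30.0, 4.5)              # MPa, bending
        decay = abs(rng.gauss(0.005, 0.002))         # capacity loss/year
        residual = max(0.1, 1.0 - decay * years)     # remaining factor
        stress = rng.lognormvariate(math.log(12.0), 0.25)  # MPa, load
        if stress > strength * residual:
            failures += 1
    return failures / n

pf_now = failure_probability(0)
pf_50y = failure_probability(50)
# Deterioration raises the failure probability (and so lowers the
# reliability index) over the service life.
```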

Relevance:

60.00%

Abstract:

Brazil will host the FIFA World Cup™, the biggest single-event competition in the world, from 12 June to 13 July 2014 in 12 cities, and the event is expected to draw an estimated 600,000 international visitors. Brazil is endemic for dengue; hence attendees of the 2014 event are theoretically at risk of dengue. We calculated the risk of dengue acquisition for non-immune international travellers to Brazil as a function of the football match schedules, considering the locations and dates of matches in June and July 2014. We estimated the average per-capita risk and the expected number of dengue cases for each host city and each game schedule, based on dengue cases reported to the Brazilian Ministry of Health for the period 2010-2013. On average, the expected number of cases among the 600,000 foreign tourists during the World Cup is 33, ranging from 3 to 59. Such risk estimates not only help individual travellers make adequate pre-travel preparations, but also provide valuable information for public health professionals and policy makers worldwide. Furthermore, estimates of dengue cases in international travellers during the World Cup can help to anticipate the theoretical risk of exportation of dengue into currently non-infected areas.
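The expected-cases arithmetic is incidence times visitor numbers, summed over host cities. With invented incidence and visitor figures (not the Ministry of Health data) for three of the twelve cities:

```python
# Invented monthly dengue incidence per 100,000 and visitor counts.
incidence_per_100k = {"Rio de Janeiro": 8.0, "Manaus": 25.0, "Brasilia": 2.0}
visitors = {"Rio de Janeiro": 200_000, "Manaus": 60_000, "Brasilia": 80_000}

# Expected cases = sum over cities of visitors * per-capita risk.
expected_cases = sum(
    visitors[city] * incidence_per_100k[city] / 100_000
    for city in visitors
)
# The study's analogous figure, over all twelve cities and the real
# schedule-dependent incidence estimates, was 33.
```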

Relevance:

60.00%

Abstract:

This master's thesis examines scenarios which can lead to a reactivity excursion due to boron dilution events at the Loviisa nuclear power plant. It also describes how boron-diluted slugs are modelled in the Probabilistic Risk Assessment (PRA) model. In the current model, the valuation of the reactivity risk due to boron dilution has been very conservative, and as a result the reactivity risk during an outage accounts for as much as 9% of the Core Damage Frequency (CDF) and the Large Release Frequency (LRF). The main objective of the thesis is to decrease the annual core damage and large release frequencies by reducing conservative assumptions in the probabilistic modelling of boron dilution events. The core behaviour during boron dilution events was modelled and reported in 2011 by Fortum using the three-dimensional core model of Apros. According to the reported results and the analyses made with the StarNode visualization program, some changes could be made in the boron dilution fault trees of the PRA model. As a result, the reactivity risk was decreased to 0.7% of the annual CDF; in other words, the total annual CDF and LRF were decreased by 8.5% due to the changes made in the PRA model. However, the analyses made with Apros include some uncertainty, and their accuracy should be validated before these changes are made in the official PRA model.
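A PRA fault tree combines basic-event probabilities through AND/OR gates, so a conservative assumption shows up directly as an inflated basic-event probability. A minimal sketch with invented numbers and independence assumed (not the Loviisa fault trees):

```python
def or_gate(*probs):
    # P(at least one of several independent basic events).
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

def and_gate(*probs):
    # P(all of several independent basic events).
    result = 1.0
    for p in probs:
        result *= p
    return result

# Invented boron-dilution branch: a diluted slug forms AND detection
# fails (alarm failure OR operator error).
slug_forms = and_gate(1e-3, 0.5)
detection_fails = or_gate(1e-2, 5e-3)
top_event = and_gate(slug_forms, detection_fails)
# Reducing a conservative basic-event probability reduces this
# branch's contribution to CDF proportionally.
```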

Relevance:

60.00%

Abstract:

Three main changes to current risk analysis processes are proposed to improve their transparency, openness, and accountability. First, the addition of a formal framing stage would allow interested parties, experts and officials to work together as needed to gain an initial shared understanding of the issue, the objectives of regulatory action, and alternative risk management measures. Second, the scope of the risk assessment would be expanded to include the assessment of health and environmental benefits as well as risks, and the explicit consideration of the economic and social impacts of risk management action and their distribution. In addition, approaches were developed for deriving improved information from genomic, proteomic and metabolomic profiling methods and for probabilistic modelling of health impacts for risk assessment purposes. Third, in an added evaluation stage, interested parties, experts, and officials may compare and weigh the risks, costs, and benefits and their distribution. As part of a set of recommendations on risk communication, we propose that reports on each stage should be made public.

Relevance:

60.00%

Abstract:

This work describes the probabilistic modelling of a Bayesian mechanism for improving the location estimates of an already deployed location system by fusing its outputs with those of low-cost binary sensors. The mechanism takes advantage of the localization capabilities of the different technologies usually present in smart-environment deployments. The performance of the proposed algorithm over a real sensor deployment is evaluated using both simulated and real experimental data.
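The fusion step is a plain Bayes update: the deployed system's estimate acts as the prior, and the binary sensor contributes a likelihood that depends on whether the target is inside its coverage. A one-dimensional, grid-based sketch with assumed detection and false-alarm rates (the paper's actual sensor models are not reproduced here):

```python
import math

def fuse(prior_mu, prior_sigma, sensor_pos, sensor_range, fired,
         p_detect=0.9, p_false=0.05):
    # Discretise position, weight the Gaussian prior by the binary
    # sensor's likelihood, and return the posterior mean location.
    n = 400
    lo, hi = prior_mu - 4 * prior_sigma, prior_mu + 4 * prior_sigma
    grid = [lo + i * (hi - lo) / n for i in range(n + 1)]
    weights = []
    for x in grid:
        prior = math.exp(-0.5 * ((x - prior_mu) / prior_sigma) ** 2)
        in_range = abs(x - sensor_pos) <= sensor_range
        if fired:
            like = p_detect if in_range else p_false
        else:
            like = (1 - p_detect) if in_range else (1 - p_false)
        weights.append(prior * like)
    total = sum(weights)
    return sum(x * w for x, w in zip(grid, weights)) / total

# Location system says ~0 m; a binary sensor centred at 3 m fires.
with_detection = fuse(0.0, 2.0, 3.0, 1.0, fired=True)
without_detection = fuse(0.0, 2.0, 3.0, 1.0, fired=False)
```

A detection pulls the fused estimate toward the sensor's coverage area; a non-detection pushes it away, which is exactly the low-cost information the binary sensors contribute.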

Relevance:

60.00%

Abstract:

A practical Bayesian approach for inference in neural network models has been available for ten years, and yet it is not used frequently in medical applications. In this chapter we show how both regularisation and feature selection can bring significant benefits in diagnostic tasks through two case studies: heart arrhythmia classification based on ECG data and the prognosis of lupus. In the first of these, the number of variables was reduced by two thirds without significantly affecting performance, while in the second, only the Bayesian models had an acceptable accuracy. In both tasks, neural networks outperformed other pattern recognition approaches.
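The regularisation-plus-feature-selection idea can be illustrated far more simply than with a full Bayesian neural network: an L2 penalty (the MAP counterpart of a Gaussian prior on the weights) keeps uninformative inputs near zero, suggesting which variables can be dropped. A toy logistic-regression sketch on synthetic data, not the chapter's ECG or lupus models:

```python
import math
import random

def train_logistic(X, y, l2=0.1, lr=0.5, epochs=300):
    # Batch gradient descent on L2-penalised logistic loss; the
    # penalty corresponds to a Gaussian prior on the weights.
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(epochs):
        grad = [l2 * wi for wi in w]
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj
        w = [wi - lr * g / n for wi, g in zip(w, grad)]
    return w

# Synthetic task: feature 0 determines the class, feature 1 is noise.
rng = random.Random(0)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(200)]
y = [1 if xi[0] > 0 else 0 for xi in X]
w = train_logistic(X, y)
# |w[0]| ends up much larger than |w[1]|, flagging feature 1 as a
# candidate for removal.
```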