906 results for Timed and Probabilistic Automata
Abstract:
We introduce two probabilistic, data-driven models that predict a ship's speed and the situations in which a ship is likely to get stuck in ice, based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges, rafted ice, and ice compression. Two datasets were utilized to develop the models. First, data from the Automatic Identification System about the performance of a selected ship were used. Second, a numerical ice model, HELMI, developed at the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single, unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in challenging ice conditions that varied in time and space. The obtained results show good predictive power of the models: on average 80% for predicting the ship's speed within specified bins, and above 90% for predicting cases where a ship may get stuck in ice. We expect this new approach to facilitate safe and effective route selection for ice-covered waters where the ship performance is reflected in the objective function.
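For illustration, a minimal sketch of the kind of prediction such models perform, using a naive Bayes classifier on synthetic data as a simple stand-in for the paper's Bayesian learning algorithms (the feature names, bin edges, and data below are assumptions, not the authors' model):

# Minimal sketch: predict a discretized ship speed from ice features with a
# naive Bayes classifier as a stand-in for the paper's Bayesian model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Hypothetical training data: [level-ice thickness (m), concentration (0-1),
# ridge density, compression index] per observation.
X = rng.random((500, 4))
speed = 12.0 * (1.0 - X[:, 0]) * X[:, 1] + rng.normal(0, 0.5, 500)  # knots
bins = np.digitize(speed, [3.0, 6.0, 9.0])  # speed bins as class labels

model = GaussianNB().fit(X, bins)
print(model.predict_proba([[0.4, 0.9, 0.2, 0.1]]))  # P(speed bin | ice features)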
Abstract:
Existing models estimating oil spill costs at sea are based on data from the past, and they usually lack a systematic approach. This makes them passive and limits their ability to forecast how changes in the oil-combating fleet or in the location of a spill affect the costs. In this paper we make an attempt towards the development of a probabilistic and systematic model estimating the costs of clean-up operations for the Gulf of Finland. For this purpose we utilize expert knowledge along with the available data and information from the literature. The obtained information is then combined into a framework with the use of Bayesian Belief Networks. Due to the lack of data, we validate the model by comparing its results with existing models, with which we find good agreement. We anticipate that the presented model can contribute to cost-effective oil-combating fleet optimization for the Gulf of Finland. It can also facilitate the estimation of accident consequences in the framework of formal safety assessment (FSA).
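As a hedged illustration of the Bayesian Belief Network machinery involved, a hand-coded query over a toy network with hypothetical variables and probabilities (not the paper's network):

# Minimal sketch of a BBN cost query: marginalize a conditional probability
# table over its parent variables. All values are illustrative.
P_size = {"small": 0.7, "large": 0.3}        # P(SpillSize)
P_near = {True: 0.5, False: 0.5}             # P(NearPort)
P_cost = {                                    # P(HighCost | SpillSize, NearPort)
    ("small", True): 0.2, ("small", False): 0.4,
    ("large", True): 0.6, ("large", False): 0.9,
}

# Marginal probability of a high clean-up cost: sum over parent states.
p_high = sum(P_size[s] * P_near[n] * P_cost[(s, n)]
             for s in P_size for n in P_near)
print(f"P(high cost) = {p_high:.3f}")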
Abstract:
The selection of metrics for ecosystem restoration programs is critical for improving the quality of monitoring programs and characterizing project success, yet it is often very difficult to balance the importance of multiple ecological, social, and economic metrics. The metric selection process is complex and must simultaneously take into account monitoring data, environmental models, socio-economic considerations, and stakeholder interests. We propose multicriteria decision analysis (MCDA) methods, broadly defined, for the selection of optimal sets of metrics to enhance the evaluation of ecosystem restoration alternatives. Two MCDA methods, multiattribute utility analysis (MAUT) and probabilistic multicriteria acceptability analysis (ProMAA), are applied and compared for a hypothetical case study of a river restoration involving multiple stakeholders. Overall, MCDA results in a systematic, unbiased, and transparent solution that informs the evaluation of restoration alternatives. The two methods provide comparable results in terms of selected metrics; however, because ProMAA can consider probability distributions for the weights and utility values of metrics under each criterion, it is suggested as the best option if data uncertainty is high. Despite the increased complexity of the metric selection process, MCDA improves upon the current ad-hoc decision practice based on consultations with stakeholders and experts, and encourages transparent and quantitative aggregation of data and judgement, increasing the transparency of decision making in restoration projects. We believe that MCDA can enhance the overall sustainability of ecosystem restoration by serving both ecological and societal needs.
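For intuition, a minimal worked MAUT aggregation with illustrative weights and utilities (the criteria, weights, and metric sets below are hypothetical, not from the case study):

# Minimal sketch of multiattribute utility (MAUT) aggregation: each
# alternative's score is a weighted sum of per-criterion utilities in [0, 1].
weights = {"ecological": 0.5, "social": 0.3, "economic": 0.2}
utilities = {
    "metric_set_A": {"ecological": 0.8, "social": 0.4, "economic": 0.6},
    "metric_set_B": {"ecological": 0.6, "social": 0.7, "economic": 0.7},
}
scores = {alt: sum(weights[c] * u[c] for c in weights)
          for alt, u in utilities.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)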
Abstract:
Probabilistic climate data have become available for the first time through the UK Climate Projections 2009, so that the risk of tree growth change can be quantified. We assess drought risk spatially and temporally using drought probabilities and tree species vulnerabilities across Britain. We assessed the drought impact on the potential yield class of three major tree species (Picea sitchensis, Pinus sylvestris, and Quercus robur), which presently cover around 59% (400,700 ha) of state-managed forests, across lowland and upland sites. Here we show that drought impacts result mostly in reduced tree growth over the next 80 years under the B1, A1B, and A1FI IPCC emissions scenarios. We found a maximum reduction of 94% but also a maximum increase of 56% in potential stand yield class in the 2080s relative to the baseline climate (1961-1990). Furthermore, potential production over the national forest estate for all three species in the 2080s may decrease due to drought by 42% in the lowlands and 32% in the uplands in comparison to the baseline climate. Our results reveal that potential tree growth and forest production on the national forest estate in Britain are likely to decline, and indicate where and when adaptation measures are required. Moreover, this paper demonstrates the value of probabilistic climate projections for an important economic and environmental sector.
Abstract:
This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
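As a sketch of the underlying sampler, a toy Metropolis-Hastings chain over a two-dimensional vehicle position (the Gaussian target and random-walk proposal are illustrative assumptions, not the paper's likelihood or motion model):

# Minimal Metropolis-Hastings sketch: sample a vehicle state (x, y) under a
# toy observation likelihood, as a stand-in for the paper's MCMC tracker.
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(state):
    # Toy observation model: Gaussian around a "true" position (3, 2).
    return -0.5 * np.sum((state - np.array([3.0, 2.0])) ** 2)

state = np.zeros(2)
samples = []
for _ in range(5000):
    proposal = state + rng.normal(0, 0.5, 2)   # symmetric random-walk proposal
    if np.log(rng.random()) < log_likelihood(proposal) - log_likelihood(state):
        state = proposal                        # accept the move
    samples.append(state.copy())

print(np.mean(samples[1000:], axis=0))  # posterior mean after burn-in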
Abstract:
The adaptation to the European Higher Education Area (EHEA) is becoming a great challenge for the university community, especially for its teaching and research staff, who are actively involved in the teaching-learning process. It is also inducing a paradigm change for lecturers and students. Among the methodologies used in teaching innovation, systems thinking plays an important role, working mainly with mind maps; it focuses on highlighting the essence of the knowledge and allows its visual representation. In this paper, a method for using these mind maps to organize a particular subject is explained. This organization is completed with the definition of duration, precedence relationships, and resources for each of the activities, as well as with their corresponding monitoring. Mind maps are generated by means of the MINDMANAGER package, whilst Ms-PROJECT is used for establishing task relationships, durations, resources, and monitoring. In summary, this paper describes a procedure and the necessary set of applications for self-organizing and managing (timed) scheduled teaching tasks.
Abstract:
In this paper, the presynaptic rule, a classical rule for Hebbian learning, is revisited. It is shown that the presynaptic rule exhibits relevant synaptic properties such as synaptic directionality and LTP metaplasticity (long-term potentiation threshold metaplasticity). With slight modifications, the presynaptic model also exhibits metaplasticity of the long-term depression threshold, making it consistent with Artola, Brocher, and Singer's (ABS) influential model. Two asymptotically equivalent versions of the presynaptic rule were adopted for this analysis: the first uses an incremental equation while the second uses conditional probabilities. Despite their simplicity, both types of presynaptic rules exhibit sophisticated biological properties, especially the probabilistic version.
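To make the incremental form concrete, a toy presynaptically gated weight update; the specific equation below is a common textbook gating form chosen for illustration and is not claimed to be the paper's rule:

# Minimal sketch of an incremental, presynaptically gated Hebbian update.
# Illustrative form: delta_w = lr * pre * (post - w); no presynaptic
# activity means no weight change (directionality).
def presynaptic_update(w, pre, post, lr=0.1):
    """Weight change is gated by presynaptic activity `pre` in [0, 1]."""
    return w + lr * pre * (post - w)

w = 0.5
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
    w = presynaptic_update(w, pre, post)
    print(f"pre={pre} post={post} -> w={w:.3f}")  # w is unchanged when pre == 0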
Abstract:
The authors, all from UPM, have worked on different academic and practical cases on this subject at different times, being of different ages. Building on the precedent of E. Torroja and A. Páez in Madrid, Spain (probabilistic safety models for concrete, circa 1957, now discussed in ICOSSAR conferences), author J.M. Antón, involved since autumn 1967 in European steel construction within CECM, produced a mathematical model for reductions under the superposition of independent loads, and from it a load-coefficient pattern for codes (Rome, February 1969) that was practically adopted for European constructions; at JCSS (Lisbon, February 1974) he suggested its unification for concrete, steel, and aluminium. That model represents loads with Gumbel Type I distributions: the 50-year distribution for one type of load is reduced to 1 year so that it can be added to other independent loads, and the sum is then set, using Gumbel theory, back to a 50-year return period; parallel models exist. A complete reliability system was produced, including non-linear effects such as buckling, phenomena considered to some extent in the current Construction Eurocodes derived from the Model Codes. The author also considered the system in CEB in the presence of hydraulic effects from rivers, floods, and the sea, with reference to actual practice. When drafting a road drainage norm for MOPU (Spain), the authors developed an optimization model for determining the return period, 10 to 50 years, for the hydraulic flows to be considered in road drainage. Satisfactory examples were a stream in south-east Spain using a Gumbel Type I model and a paper by Ven Te Chow on the Mississippi at Keokuk using Gumbel Type II; the model can be modernized with a wider range of extreme-value laws. In the MOPU drainage norm, the drafting commission also acted as an expert panel to set a table of return periods for road drainage elements, effectively a complex multi-criteria decision system. These ideas were used, for example, in widely applied codes and presented at symposia and meetings, but not published in English-language journals; a condensed account of the authors' contributions is presented here. The authors are also involved in optimization for hydraulic and agricultural planning, and give modest hints of intended applications to agricultural and environmental planning, as a selection of the criteria and utility functions involved in Bayesian, multi-criteria, or mixed decision systems. Modest consideration is given to climate change, to production and commercial systems, and to social and financial aspects.
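A minimal sketch of the Gumbel Type I return-period arithmetic referred to above, with hypothetical location and scale parameters:

# Gumbel Type I: F(x) = exp(-exp(-(x - mu)/beta)). The T-year return level
# solves F(x) = 1 - 1/T; if annual maxima are i.i.d. Gumbel(mu, beta), the
# T-year maximum is Gumbel(mu + beta*ln(T), beta), so changing the return
# period only shifts the location parameter. Parameters below are illustrative.
import math

mu, beta = 10.0, 2.0  # hypothetical Gumbel location/scale for annual maxima

def return_level(T):
    """Load level exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

mu_50yr = mu + beta * math.log(50)  # location of the 50-year-maximum distribution

print(f"10-year design level: {return_level(10):.2f}")
print(f"50-year design level: {return_level(50):.2f}")
print(f"50-year maximum distribution location: {mu_50yr:.2f}")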
Abstract:
A Probabilistic Safety Assessment (PSA) is being developed for a steam-methane reforming hydrogen production plant linked to a High-Temperature Gas-Cooled Nuclear Reactor (HTGR). This work is based on the Japan Atomic Energy Research Institute's (JAERI) High Temperature Test Reactor (HTTR) prototype in Japan. This study has two major objectives: to calculate the risk to onsite and offsite individuals, and to calculate the frequency of different types of damage to the complex. A simplified HAZOP study was performed to identify initiating events, based on existing studies. The initiating events presented here are a methane pipe break, a helium pipe break, and a PPWC heat exchanger pipe break. Generic data were used for the fault tree analysis and the initiating event frequencies. Saphire was used for the PSA analysis. The results show that the average frequency of an accident at this complex is 2.5E-06, divided among the various end states. The dominant sequences result in graphite oxidation, which does not pose a health risk to the population. The dominant sequences that could affect the population are those that result in a methane explosion, which occur at a frequency of 6.6E-8/year, while the other sequences are much less frequent. The health risk arises only if there are people in the vicinity who could be affected by the explosion. This analysis also demonstrates that an accident in one of the plants has little effect on the other, given the design-basis distance between the plants, the fact that the reactor is underground, and other safety characteristics of the HTGR. Sensitivity studies are being performed in order to determine where additional and improved data are needed.
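To illustrate how such sequence frequencies are composed in a PSA, a toy calculation multiplying an initiating-event frequency by subsequent failure probabilities (all numbers and events below are hypothetical, not from this study):

# Minimal sketch of a PSA accident-sequence frequency: initiating-event
# frequency times the probabilities of the failures that must follow it.
init_freq = 1.0e-3            # methane pipe break, per year (assumed)
p_isolation_fails = 1.0e-2    # fault-tree top-event probability (assumed)
p_ignition = 5.0e-3           # conditional ignition probability (assumed)

seq_freq = init_freq * p_isolation_fails * p_ignition
print(f"explosion sequence frequency: {seq_freq:.1e} per year")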
Abstract:
The technique of Abstract Interpretation [11] has allowed the development of sophisticated program analyses which are provably correct and practical. The semantic approximations produced by such analyses have been traditionally applied to optimization during program compilation. However, recently, novel and promising applications of semantic approximations have been proposed in the more general context of program validation and debugging [3,9,7].
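For readers unfamiliar with the technique, a classic textbook illustration of a semantic approximation: abstract multiplication over the sign domain (a generic example, not one of the analyses discussed in the paper):

# Minimal sketch of abstract interpretation: compute with abstract "sign"
# values instead of concrete numbers, soundly approximating the semantics.
def abs_mul(a, b):
    """Abstract multiplication over the domain {'-', '0', '+'}."""
    if a == "0" or b == "0":
        return "0"
    return "+" if a == b else "-"

# The analysis proves x*y is negative whenever x > 0 and y < 0, without
# ever running the program on concrete inputs.
print(abs_mul("+", "-"))  # -> '-'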
Abstract:
This paper studies feature subset selection in classification using a multiobjective estimation of distribution algorithm. We consider six functions, namely area under the ROC curve, sensitivity, specificity, precision, F1 measure, and Brier score, for the evaluation of feature subsets and as the objectives of the problem. One characteristic of these objective functions is the existence of noise in their values, which should be appropriately handled during optimization. Our proposed algorithm consists of two major techniques specifically designed for the feature subset selection problem. The first is a solution ranking method based on interval values to handle the noise in the objectives of this problem. The second is a model estimation method for learning a joint probabilistic model of objectives and variables, which is used to generate new solutions and advance through the search space. To simplify model estimation, l1-regularized regression is used to select a subset of problem variables before model learning. The proposed algorithm is compared with a well-known ranking method for interval-valued objectives and a standard multiobjective genetic algorithm. In particular, the effects of the two new techniques are experimentally investigated. The experimental results show that the proposed algorithm is able to obtain comparable or better performance on the tested datasets.
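A minimal sketch of the l1-regularized pre-selection step on synthetic data (the regularization strength and the data are assumptions for illustration):

# Keep only the variables with non-zero Lasso coefficients before model
# learning, as in the l1-regularized pre-selection described above.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 30))
y = X[:, 0] - 2.0 * X[:, 3] + rng.normal(0, 0.1, 200)  # only 2 relevant variables

lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print("selected variables:", selected)  # expected: [0, 3]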
Abstract:
This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on joint modeling of objectives and variables. This EDA uses the multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives and between variables and objectives, as well as the dependencies learnt between variables in other Bayesian network-based EDAs. This model leads to a problem decomposition that helps the proposed algorithm to find better trade-off solutions to the multi-objective problem. In addition to Pareto set approximation, the algorithm is also able to estimate the structure of the multi-objective problem. To apply the algorithm to many-objective problems, it includes four different ranking methods proposed in the literature for this purpose. The algorithm is applied to the set of walking fish group (WFG) problems, and its optimization performance is compared with an evolutionary algorithm and another multi-objective EDA. The experimental results show that the proposed algorithm performs significantly better on many of the problems and across different objective space dimensions, and achieves comparable results on some of the problems compared with the other algorithms.
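For intuition about the EDA family, a toy sample-select-re-estimate loop with a univariate Gaussian model (a deliberately simple stand-in; the paper's algorithm learns a multi-dimensional Bayesian network over objectives and variables instead):

# Minimal EDA loop: sample solutions from a probabilistic model, select the
# best, and re-estimate the model from the selected set.
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    """Toy objective to minimize (sphere function)."""
    return np.sum(x ** 2, axis=1)

mu, sigma = np.zeros(5), np.ones(5)
for gen in range(50):
    pop = rng.normal(mu, sigma, size=(100, 5))          # sample from the model
    elite = pop[np.argsort(f(pop))[:20]]                 # select best solutions
    mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6  # re-estimate

print("best value:", f(mu[None, :])[0])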
Abstract:
The selection of predefined analytic grids (partitions of the numeric ranges) to represent input and output functions as histograms has been proposed as a mechanism of approximation in order to control the tradeoff between accuracy and computation times in several areas ranging from simulation to constraint solving. In particular, the application of interval methods for probabilistic function characterization has been shown to have advantages over other methods based on the simulation of random samples. However, standard interval arithmetic has always been used for the computation steps. In this paper, we introduce an alternative approximate arithmetic aimed at controlling the cost of the interval operations. Its distinctive feature is that grids are taken into account by the operators. We apply the technique in the context of probability density functions in order to improve the accuracy of the probability estimates. Results show that this approach has advantages over existing approaches in some particular situations, although computation times tend to increase significantly when analyzing large functions.
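A minimal sketch of a grid-aware interval operation in the spirit described above (the grid step and the outward-rounding policy are assumptions for illustration):

# Interval addition that snaps results outward to a fixed grid, so that the
# operators themselves take the grid into account.
import math

GRID = 0.25  # hypothetical grid step shared by all operands

def snap(lo, hi):
    """Round an interval outward to grid points, preserving soundness."""
    return (math.floor(lo / GRID) * GRID, math.ceil(hi / GRID) * GRID)

def add(a, b):
    return snap(a[0] + b[0], a[1] + b[1])

x = snap(0.1, 0.4)   # -> (0.0, 0.5)
y = snap(0.6, 0.7)   # -> (0.5, 0.75)
print(add(x, y))     # grid-aligned, sound enclosure of the true sum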
Abstract:
Abstract is not available.