833 results for Probabilistic methodology


Relevance: 20.00%

Abstract:

Salmonella enterica serotypes Derby, Mbandaka, Montevideo, Livingstone, and Senftenberg were among the 10 most prevalent serotypes isolated from farm animals in England and Wales in 1999. These serotypes are of potential zoonotic relevance; however, there is currently no "gold standard" fingerprinting method for them. A collection of isolates representing these serotypes, together with serotype Gold Coast, was analyzed using plasmid profiling, pulsed-field gel electrophoresis (PFGE), and ribotyping. The success of the molecular methods in identifying DNA polymorphisms differed for each serotype. Plasmid profiling was particularly useful for serotype Derby isolates, and it also provided a good level of discrimination for serotype Senftenberg. For most serotypes, we observed a number of nontypeable plasmid-free strains, which represents a limitation of this technique. Fingerprinting of genomic DNA by ribotyping and PFGE produced results that varied significantly with the serotype of the strain. Both PstI/SphI ribotyping and XbaI-PFGE provided a similar degree of strain differentiation for serotypes Derby and Senftenberg, only marginally lower than that achieved by plasmid profiling. Ribotyping was less sensitive than PFGE when applied to serotype Mbandaka or serotype Montevideo. Serotype Gold Coast isolates were found to be nontypeable by XbaI-PFGE, and a significant proportion of them were plasmid free. Similarly, a number of serotype Livingstone isolates were nontypeable by plasmid profiling and/or PFGE. In summary, the serotype of the isolates has a considerable influence on the choice of typing strategy; no single method can be relied upon to discriminate between strains, and a combination of typing methods allows further discrimination.
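The "degree of strain differentiation" compared across typing methods is conventionally quantified with the Hunter–Gaston discriminatory index, a Simpson's-diversity-based statistic; the abstract does not state which index the authors used, so the following is purely illustrative, with invented typing results. It also shows why combining methods can only maintain or increase discrimination: the combined type of an isolate is the tuple of its individual fingerprints.

```python
from collections import Counter

def discriminatory_index(type_labels):
    """Hunter-Gaston discriminatory index: the probability that two
    isolates drawn at random belong to different types."""
    n = len(type_labels)
    counts = Counter(type_labels)
    return 1.0 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

# Invented fingerprints for six isolates:
pfge     = ["X1", "X1", "X2", "X2", "X3", "X3"]
plasmid  = ["P1", "P2", "P1", "P2", "P1", "P1"]
combined = list(zip(pfge, plasmid))  # joint type per isolate

d_pfge = discriminatory_index(pfge)      # 0.8
d_comb = discriminatory_index(combined)  # higher: combined typing splits ties
```

Isolates indistinguishable by one method may still be separated by the other, which is the abstract's point about relying on a combination of typing methods.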

Relevance: 20.00%

Abstract:

The Chartered Institution of Building Services Engineers (CIBSE) produced a technical memorandum (TM36) presenting research on future climate impacts on building energy use and thermal comfort. One climate projection for each of four CO2 emissions scenarios was used in TM36, providing a deterministic outlook. As part of the UK Climate Impacts Programme (UKCIP), probabilistic climate projections are being studied in relation to building energy simulation techniques. Including uncertainty in climate projections is considered an important advance in climate impacts modelling and is included in the latest UKCIP data (UKCP09). Incorporating the stochastic nature of these new climate projections in building energy modelling requires a significant increase in data handling and careful statistical interpretation of the results to provide meaningful conclusions. This paper compares the results from building energy simulations when applying deterministic and probabilistic climate data. This is based on two case study buildings: (i) a mixed-mode office building with exposed thermal mass and (ii) a mechanically ventilated, lightweight office building. Building (i) represents an energy-efficient building design that provides passive and active measures to maintain thermal comfort. Building (ii) relies entirely on mechanical means for heating and cooling, with its lightweight construction raising concern over increased cooling loads in a warmer climate. Devising an effective probabilistic approach highlighted greater uncertainty in predicting building performance. Results indicate that the range of calculated quantities depends not only on the building type but also, strongly, on the performance parameters of interest. Uncertainty is likely to be particularly marked with regard to thermal comfort in naturally ventilated buildings.
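The contrast between the deterministic (TM36-style) and probabilistic (UKCP09-style) approaches can be sketched with a toy Monte Carlo experiment. The building model, temperature distribution, and sensitivity coefficients below are all invented for illustration and bear no relation to the paper's simulations.

```python
import random
import statistics

def cooling_demand(summer_temp_c, lightweight=True):
    """Toy building model (illustrative only): annual cooling demand
    rises with mean summer temperature, faster for a lightweight
    construction than for one with exposed thermal mass."""
    sensitivity = 12.0 if lightweight else 7.0  # kWh/m2 per degC (invented)
    return max(0.0, sensitivity * (summer_temp_c - 18.0))

random.seed(42)

# Deterministic run: a single projection per scenario, as in TM36.
deterministic = cooling_demand(22.0)

# Probabilistic run: sample many equally plausible projections, as with
# UKCP09-style data, and summarise the resulting distribution.
samples = [cooling_demand(random.gauss(22.0, 1.5)) for _ in range(10_000)]
deciles = statistics.quantiles(samples, n=10)
p10, p90 = deciles[0], deciles[-1]
```

The single deterministic figure falls somewhere inside a wide 10th–90th percentile band, which is the extra information (and the extra data handling) the probabilistic approach brings.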

Relevance: 20.00%

Abstract:

Inducing rules from very large datasets is one of the most challenging areas in data mining. Several approaches exist for scaling up classification rule induction to large datasets, namely data reduction and the parallelisation of classification rule induction algorithms. In the area of parallelisation, most of the work has concentrated on the Top Down Induction of Decision Trees (TDIDT), also known as the ‘divide and conquer’ approach. However, powerful alternative algorithms exist that induce modular rules. Most of these alternatives follow the ‘separate and conquer’ approach to inducing rules, but very little work has been done to make the ‘separate and conquer’ approach scale to large training data. This paper examines the potential of the recently developed blackboard-based J-PMCRI methodology for parallelising modular classification rule induction algorithms that follow the ‘separate and conquer’ approach. A concrete implementation of the methodology is evaluated empirically on very large datasets.
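For readers unfamiliar with the ‘separate and conquer’ family, here is a minimal sequential sketch of a Prism-style covering learner: grow one rule greedily, remove the examples it covers, and repeat. J-PMCRI parallelises this kind of learner over a blackboard architecture, which is not reproduced here; the data and attribute names are invented.

```python
def learn_one_rule(data, target):
    """Greedily grow one conjunctive rule for `target`: keep adding the
    attribute-value test with the highest precision on the remaining
    covered examples (Prism-style)."""
    rule, covered = {}, data
    while any(label != target for _, label in covered):
        best, best_prec = None, -1.0
        for feats, _ in covered:
            for attr, val in feats.items():
                if attr in rule:
                    continue
                subset = [(f, l) for f, l in covered if f.get(attr) == val]
                prec = sum(l == target for _, l in subset) / len(subset)
                if prec > best_prec:
                    best, best_prec = (attr, val), prec
        if best is None:
            break
        rule[best[0]] = best[1]
        covered = [(f, l) for f, l in covered if f.get(best[0]) == best[1]]
    return rule

def separate_and_conquer(data, target):
    """'Separate and conquer': learn a rule, separate out the examples
    it covers, and conquer the remainder until no positives are left."""
    rules, remaining = [], list(data)
    while any(label == target for _, label in remaining):
        rule = learn_one_rule(remaining, target)
        rules.append(rule)
        remaining = [(f, l) for f, l in remaining
                     if not all(f.get(a) == v for a, v in rule.items())]
    return rules

# Invented toy dataset:
data = [
    ({"outlook": "sunny", "windy": "no"},  "play"),
    ({"outlook": "sunny", "windy": "yes"}, "play"),
    ({"outlook": "rain",  "windy": "no"},  "play"),
    ({"outlook": "rain",  "windy": "yes"}, "stay"),
]
rules = separate_and_conquer(data, "play")
```

Unlike TDIDT, each induced rule is modular: it can be read, applied, or distributed independently of the others, which is what makes this family attractive for parallelisation.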

Relevance: 20.00%

Abstract:

Long-term exposure of skylarks to a fictitious insecticide and of wood mice to a fictitious fungicide were modelled probabilistically in a Monte Carlo simulation. Within the same simulation, the consequences of pesticide exposure for reproductive success were modelled using the toxicity-exposure-linking rules developed by R.S. Bennet et al. (2005) and the interspecies extrapolation factors suggested by R. Luttik et al. (2005). We built models to reflect a range of scenarios and, as a result, were able to show how exposure to a pesticide might alter the number of individuals engaged in any given phase of the breeding cycle at any given time, and to predict the numbers of new adults at the season's end.
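The basic shape of such a Monte Carlo exposure simulation can be sketched as follows. All distributions and parameter values are invented; the toxicity-exposure-linking rules and interspecies extrapolation factors cited in the abstract are not reproduced here.

```python
import random

random.seed(1)

def simulate_breeding_season(n_pairs=100):
    """One Monte Carlo realisation of a breeding season (all numbers
    invented): each pair's pesticide dose is sampled from a lognormal
    exposure distribution and compared with a sampled sensitivity
    threshold; pairs over threshold are assumed to lose the brood."""
    successful = 0
    for _ in range(n_pairs):
        dose = random.lognormvariate(0.0, 0.8)       # mg/kg bw/day
        threshold = random.lognormvariate(1.0, 0.5)  # effect threshold
        if dose < threshold:
            successful += 1
    return successful

# Repeating the season many times yields a distribution of outcomes
# rather than a single deterministic prediction.
outcomes = [simulate_breeding_season() for _ in range(200)]
mean_successful = sum(outcomes) / len(outcomes)
```

The output is a distribution over breeding outcomes, from which quantities such as the expected number of new adults at the season's end can be read off with their uncertainty.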

Relevance: 20.00%

Abstract:

Twitter has been described as the most widely used microblogging application today. With an estimated 500 million registered users as of June 2012, Twitter has become a credible medium for expressing sentiment and opinion. It has also been a notable medium for information dissemination, including breaking news on diverse issues, since it was launched in 2007. Many organisations, individuals, and even government bodies follow activities on the network in order to learn how their audiences react to tweets that affect them. Postings on Twitter (known as tweets) can be used to analyse patterns associated with events by detecting the dynamics of the tweets. A common way of labelling a tweet is to include a number of hashtags that describe its contents. Association Rule Mining can find the likelihood of co-occurrence of hashtags. In this paper, we propose the use of temporal Association Rule Mining to detect rule dynamics, and consequently the dynamics of tweets. We have named our methodology Transaction-based Rule Change Mining (TRCM). A number of patterns are identifiable in these rule dynamics, including new rules, emerging rules, unexpected rules, and 'dead' rules. The linkage between the different types of rule dynamics is also investigated experimentally in this paper.
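The four rule-dynamics categories can be illustrated by comparing hashtag association rules mined from two consecutive tweet windows. This is a much-simplified reading of the TRCM categories (the paper's exact definitions, which use rule-matching thresholds, differ), and the rules and confidences below are invented.

```python
def rule_dynamics(rules_t1, rules_t2, eps=0.05):
    """Compare association rules {(antecedent, consequent): confidence}
    from two time windows. Simplified categories: 'new' rules appear at
    t2 with an unseen antecedent, 'dead' rules vanish after t1,
    'emerging' rules gain confidence, 'unexpected' rules keep a known
    antecedent but change its consequent."""
    t1_antecedents = {a for a, _ in rules_t1}
    new = {r for r in rules_t2
           if r not in rules_t1 and r[0] not in t1_antecedents}
    dead = {r for r in rules_t1 if r not in rules_t2}
    emerging = {r for r in rules_t2
                if r in rules_t1 and rules_t2[r] > rules_t1[r] + eps}
    unexpected = {r for r in rules_t2
                  if r not in rules_t1 and r[0] in t1_antecedents}
    return new, dead, emerging, unexpected

# Invented hashtag rules from two windows:
t1 = {("#olympics", "#london"): 0.6, ("#euro2012", "#spain"): 0.7}
t2 = {("#olympics", "#london"): 0.8, ("#euro2012", "#italy"): 0.5,
      ("#bolt", "#100m"): 0.9}

new, dead, emerging, unexpected = rule_dynamics(t1, t2)
```

Tracking which category each rule falls into over successive windows is what surfaces the event dynamics behind the tweets.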

Relevance: 20.00%

Abstract:

There are several scoring rules one can choose from in order to score probabilistic forecasting models or estimate model parameters. While it is generally agreed that proper scoring rules are preferable, there is no clear criterion for preferring one proper scoring rule over another. This manuscript compares and contrasts some commonly used proper scoring rules and provides guidance on scoring rule selection. In particular, it is shown that the logarithmic scoring rule prefers erring with more uncertainty, the spherical scoring rule prefers erring with lower uncertainty, and the other scoring rules are indifferent to either option.
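For concreteness, here are three of the commonly used proper scoring rules (logarithmic, spherical, and quadratic, the positively oriented variant of the Brier score) for a categorical forecast, together with a numerical check of propriety for one example distribution. This sketch does not reproduce the manuscript's analysis of how the rules differ when a forecaster errs.

```python
import math

def log_score(p, k):
    """Logarithmic score of forecast p when outcome k occurs."""
    return math.log(p[k])

def spherical_score(p, k):
    """Spherical score: probability of the outcome, normalised by the
    Euclidean norm of the forecast vector."""
    return p[k] / math.sqrt(sum(pi * pi for pi in p))

def quadratic_score(p, k):
    """Quadratic score (positively oriented Brier variant)."""
    return 2 * p[k] - sum(pi * pi for pi in p)

def expected_score(score, p, q):
    """Expected score of forecast p when outcomes are drawn from q."""
    return sum(qk * score(p, k) for k, qk in enumerate(q) if qk > 0)

q = [0.6, 0.3, 0.1]            # data-generating distribution
hedged = [1 / 3, 1 / 3, 1 / 3]  # maximally uncertain forecast

# Propriety: for each rule, reporting q itself beats hedging.
for score in (log_score, spherical_score, quadratic_score):
    assert expected_score(score, q, q) > expected_score(score, hedged, q)
```

All three rules reward truthful reporting in expectation; the manuscript's contribution is to characterise how they rank *imperfect* forecasts differently, which propriety alone does not settle.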

Relevance: 20.00%

Abstract:

Three wind gust estimation (WGE) methods implemented in the numerical weather prediction (NWP) model COSMO-CLM are evaluated with respect to their forecast quality using skill scores. Two methods estimate gusts locally from the mean wind speed and the turbulence state of the atmosphere, while the third considers the mixing-down of high momentum within the planetary boundary layer (WGE Brasseur). One hundred and fifty-eight windstorms from the last four decades are simulated, and the results are compared with gust observations at 37 stations in Germany. Skill scores reveal that the local WGE methods show better overall behaviour, whilst WGE Brasseur performs less well except in mountain regions. The WGE method introduced here, based on turbulent kinetic energy (TKE), permits a probabilistic interpretation using statistical characteristics of gusts at observational sites for an assessment of uncertainty. The WGE TKE formulation has the advantage of a 'native' interpretation of wind gusts as the result of the local occurrence of TKE. The inclusion of a probabilistic WGE TKE approach in NWP models thus has several advantages over other methods, as it has the potential to estimate the uncertainty of gusts at observational sites.
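A generic local TKE-based gust estimate has the form "mean wind plus a multiple of the turbulent velocity fluctuation". The coefficients below are illustrative assumptions, not the formulation actually used in COSMO-CLM: isotropic turbulence is assumed, so the streamwise standard deviation is sqrt(2·TKE/3), and the gust factor `alpha` is an invented tuning constant.

```python
import math

def gust_estimate_tke(v_mean, tke, alpha=3.0):
    """Sketch of a local TKE-based gust estimate (illustrative
    coefficients). v_mean in m/s, tke in m2/s2; returns gust in m/s."""
    sigma_u = math.sqrt(2.0 * tke / 3.0)  # isotropy assumption
    return v_mean + alpha * sigma_u

gust = gust_estimate_tke(v_mean=18.0, tke=6.0)  # 18 + 3*sqrt(4) = 24 m/s
```

Because the gust is expressed as a deviation scaled by the modelled turbulence, replacing the fixed `alpha` with a distribution fitted to station statistics is what opens the formulation to the probabilistic interpretation described in the abstract.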

Relevance: 20.00%

Abstract:

We present a simple sieving methodology to aid the recovery of large cultigen pollen grains, such as maize (Zea mays L.), manioc (Manihot esculenta Crantz), and sweet potato (Ipomoea batatas L.), among others, for the detection of food production using fossil pollen analysis of lake sediments in the tropical Americas. The new methodology was tested on three large study lakes located next to known and/or excavated pre-Columbian archaeological sites in South and Central America. Five paired samples, one treated by sieving, the other prepared using standard methodology, were compared for each of the three sites. Using the new methodology, chemically digested sediment samples were passed through a 53 µm sieve, and the residue was retained, mounted in silicone oil, and counted for large cultigen pollen grains. The filtrate was mounted and analysed for pollen according to standard palynological procedures. Zea mays (L.) was recovered from the sediments of all three study lakes using the sieving technique, where no cultigen pollen had been previously recorded using the standard methodology. Confidence intervals demonstrate there is no significant difference in pollen assemblages between the sieved versus unsieved samples. Equal numbers of exotic Lycopodium spores added to both the filtrate and residue of the sieved samples allow for direct comparison of cultigen pollen abundance with the standard terrestrial pollen count. Our technique enables the isolation and rapid scanning for maize and other cultigen pollen in lake sediments, which, in conjunction with charcoal and pollen records, is key to determining land-use patterns and the environmental impact of pre-Columbian societies.
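The exotic-marker bookkeeping behind the sieving comparison is the standard Lycopodium calculation: because the same number of spores is added to both the residue and the filtrate, concentrations from the two preparations are directly comparable. The counts below are invented for illustration.

```python
def pollen_concentration(pollen_counted, marker_counted, marker_added, volume_cm3):
    """Standard exotic-marker calculation: estimated pollen grains per
    cm3 of sediment, scaled by the recovery rate of added Lycopodium."""
    return pollen_counted / marker_counted * marker_added / volume_cm3

# Invented counts; 10,000 Lycopodium spores added to each preparation
# of a 1 cm3 sample.
zea_residue = pollen_concentration(4, 200, 10_000, 1.0)       # Zea in >53 um residue
total_filtrate = pollen_concentration(300, 150, 10_000, 1.0)  # standard count, filtrate
```

Dividing the two concentrations then expresses cultigen abundance relative to the standard terrestrial pollen sum, even though the grains were counted in different fractions.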

Relevance: 20.00%

Abstract:

Contrails, and especially their evolution into cirrus-like clouds, are thought to have very important effects on local and global radiation budgets, though they are generally not well represented in global climate models. The lack of contrail parameterisations is due to the limited availability of in situ contrail measurements, which are difficult to obtain. Here we present a methodology for the successful sampling and interpretation of contrail microphysical and radiative data using both in situ and remote sensing instrumentation on board the FAAM BAe146 UK research aircraft as part of the COntrails Spreading Into Cirrus (COSIC) study.

Relevance: 20.00%

Abstract:

We propose and demonstrate a fully probabilistic (Bayesian) approach to the detection of cloudy pixels in thermal infrared (TIR) imagery observed from satellite over oceans. Using this approach, we show how to exploit the prior information and the fast forward-modelling capability that are typically available in the operational context to obtain improved cloud detection. The probability of clear sky for each pixel is estimated by applying Bayes' theorem, and we describe how to apply the theorem to this problem in general terms. Joint probability density functions (PDFs) of the observations in the TIR channels are needed; the PDFs for clear conditions are calculable by forward modelling, and those for cloudy conditions have been obtained empirically. Using analysis fields from numerical weather prediction as prior information, we apply the approach to imagery representative of imagers on polar-orbiting platforms. In comparison with the established cloud-screening scheme, the new technique decreases both the rate of failure to detect cloud contamination and the false-alarm rate by one quarter. The rate of occurrence of cloud-screening-related errors of >1 K in area-averaged sea surface temperatures (SSTs) is reduced by 83%. Copyright © 2005 Royal Meteorological Society.
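The per-pixel calculation reduces to Bayes' theorem with two likelihoods and a prior. The one-dimensional Gaussian PDFs and brightness-temperature values below are invented stand-ins: in the scheme described, the clear-sky PDF comes from fast forward modelling of NWP analysis fields and the cloudy PDF is empirical and multi-channel.

```python
import math

def p_clear(y, prior_clear, pdf_clear, pdf_cloudy):
    """Bayes' theorem for one pixel: posterior probability of clear sky
    given the TIR observation y."""
    num = pdf_clear(y) * prior_clear
    den = num + pdf_cloudy(y) * (1.0 - prior_clear)
    return num / den

def gaussian(mean, sd):
    """1-D normal density, used here as a stand-in for the joint PDFs."""
    return lambda y: (math.exp(-0.5 * ((y - mean) / sd) ** 2)
                      / (sd * math.sqrt(2 * math.pi)))

# Invented example: clear-sky brightness temperatures cluster tightly
# near the forward-modelled value; cloudy ones are colder and broader.
pdf_clear = gaussian(290.0, 1.0)
pdf_cloudy = gaussian(275.0, 8.0)

warm_pixel = p_clear(290.0, 0.7, pdf_clear, pdf_cloudy)  # near 1
cold_pixel = p_clear(270.0, 0.7, pdf_clear, pdf_cloudy)  # near 0
```

Because the output is a probability rather than a binary flag, the downstream SST processing can threshold it to trade missed cloud against false alarms, which is how the quoted error-rate reductions are obtained.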

Relevance: 20.00%

Abstract:

The aim of this article is to improve the communication of probabilistic flood forecasts generated by hydrological ensemble prediction systems (HEPS) by understanding perceptions of different methods of visualizing probabilistic forecast information. This study focuses on interexpert communication and accounts for differences in visualization requirements based on the information content necessary for individual users. The perceptions of the expert group addressed in this study are important because they are the designers and primary users of existing HEPS. Nevertheless, they have sometimes resisted the release of uncertainty information to the general public because of doubts about whether it can be communicated in ways that are readily understood by nonexperts. In this article, we explore the strengths and weaknesses of existing HEPS visualization methods and thereby formulate some wider recommendations about best practice for HEPS visualization and communication. We suggest that specific training on probabilistic forecasting would foster the use of probabilistic forecasts across a wider range of applications. The results of a case study exercise showed that there is no overarching agreement among experts on how to display probabilistic forecasts or on what they consider the essential information that should accompany plots and diagrams. We therefore propose a list of minimum properties that, if consistently displayed with probabilistic forecasts, would make the products more easily understandable. Copyright © 2012 John Wiley & Sons, Ltd.