951 results for Imprecise Dirichlet Model, Extreme Imprecise Dirichlet Model, Classification, TANC, Credal dominance


Relevance:

60.00%

Publisher:

Abstract:

Uncertainty plays an important role in water quality management problems. The major sources of uncertainty in a water quality management problem are the random nature of hydrologic variables and the imprecision (fuzziness) associated with the goals of dischargers and pollution control agencies (PCA). Many Waste Load Allocation (WLA) problems are solved by considering these two sources of uncertainty. Apart from randomness and fuzziness, missing data in the time series of a hydrologic variable may introduce additional uncertainty due to partial ignorance, which renders the input parameters imprecise in water quality decision making. In this paper an Imprecise Fuzzy Waste Load Allocation Model (IFWLAM) is developed for water quality management of a river system subject to uncertainty arising from partial ignorance. In a WLA problem, randomness and imprecision can be addressed simultaneously through the fuzzy risk of low water quality, and a methodology is developed to compute this imprecise fuzzy risk when the parameters are characterized by uncertainty due to partial ignorance. A Monte Carlo simulation evaluates the imprecise fuzzy risk of low water quality by treating the input variables as imprecise. The model is formulated as a fuzzy multiobjective optimization problem with max-min as the operator, which usually yields multiple solutions rather than a unique one. Two optimization models are therefore developed to capture all the decision alternatives; their objective is to obtain a range of fractional removal levels for the dischargers such that the resultant fuzzy risk remains within acceptable limits. Specifying a range for the fractional removal levels enhances flexibility in decision making. The methodology is demonstrated with a case study of the Tunga-Bhadra river system in India.
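The core computation, fuzzy risk as the expected membership of the "low water quality" event over random hydrologic realizations, can be sketched as below. The dissolved-oxygen distribution, membership thresholds, and function names are illustrative assumptions, not values from the paper:

```python
import random

def low_quality_membership(do_mgl, do_low=4.0, do_safe=6.0):
    """Fuzzy membership of the 'low water quality' event based on dissolved
    oxygen (DO): 1 below do_low, 0 above do_safe, linear in between.
    (Thresholds and the linear shape are illustrative, not the paper's.)"""
    if do_mgl <= do_low:
        return 1.0
    if do_mgl >= do_safe:
        return 0.0
    return (do_safe - do_mgl) / (do_safe - do_low)

def fuzzy_risk(n=10_000, seed=1):
    """Monte Carlo estimate of fuzzy risk: the expected membership of
    'low water quality' over random DO realizations (hypothetical Gaussian)."""
    rng = random.Random(seed)
    return sum(low_quality_membership(rng.gauss(5.5, 1.0))
               for _ in range(n)) / n

# Under partial ignorance an input is an interval rather than a point;
# evaluating the risk at the interval bounds brackets the imprecise fuzzy risk.
risk = fuzzy_risk()
```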


We investigate the ability of a global atmospheric general circulation model (AGCM) to reproduce observed 20-year return values of the annual maximum daily precipitation totals over the continental United States as a function of horizontal resolution. We find that at the high resolutions enabled by contemporary supercomputers, the AGCM can produce values of magnitude comparable to high-quality observations. However, at the resolutions typical of the coupled general circulation models used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, the precipitation return values are severely underestimated.
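The 20-year return value compared here is the level exceeded with probability 1/20 in any given year. A minimal sketch, using a Gumbel fit by the method of moments as a simple stand-in for the extreme-value fits typical of such studies (synthetic data, hypothetical parameters):

```python
import math
import random

def gumbel_return_level(annual_maxima, T=20):
    """Estimate the T-year return value from annual maxima by fitting a
    Gumbel distribution with the method of moments. (A simple stand-in for
    the GEV fits typically used in precipitation-extremes studies.)"""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale parameter
    mu = mean - 0.5772156649 * beta         # location (Euler-Mascheroni const.)
    # Return level: exceeded with probability 1/T in any given year.
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Synthetic annual-maximum daily precipitation totals (mm), illustrative only.
rng = random.Random(0)
maxima = [rng.gauss(60, 15) for _ in range(50)]
rv20 = gumbel_return_level(maxima, T=20)
```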


The introduction of casemix funding for Australian acute health care services has challenged Social Work to establish clear reporting mechanisms, demonstrate effective practice, and justify the interventions provided. The term 'casemix' describes the mix and type of patients treated by a hospital or other health care service. It is widely acknowledged that the procedure-based system of Diagnosis Related Groups (DRGs) is grounded in a medical/illness perspective and is unsatisfactory for describing and predicting the activity of Social Work and other allied health professions in health care service delivery. The National Allied Health Casemix Committee was established in 1991 as the peak body representing allied health professions in matters related to casemix classification, and has pioneered a nationally consistent, patient-centred information system for allied health. This paper describes the classification systems and codes developed for Social Work, including a minimum data set, a classification hierarchy, a set of activity (input) codes and 'indicator for intervention' codes. The advantages and limitations of the system are also discussed.


Land cover (LC) refers to what is actually present on the ground, and LC information provides insight into issues ranging from water pollution to sustainable economic development. One of the greatest challenges in modeling LC change from remotely sensed (RS) data is the scale-resolution mismatch: the spatial resolution is coarser than what is required, and the resulting sub-pixel heterogeneity is important but not directly observable, so many pixels consist of a mixture of multiple classes. Solutions to the mixed-pixel problem typically center on soft classification techniques that estimate the proportion of each class within a pixel; however, the spatial distribution of these class components within the pixel remains unknown. This study investigates Orthogonal Subspace Projection (OSP), an unmixing technique, combined with a pixel-swapping algorithm to predict the spatial distribution of LC at sub-pixel resolution. Both algorithms are applied to simulated and actual satellite images for validation. Accuracy on the simulated images is close to 100%, while IRS LISS-III and MODIS data show accuracies of 76.6% and 73.02%, respectively. This demonstrates the relevance of these techniques for applications such as urban/non-urban and forest/non-forest classification studies.
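A minimal sketch of the pixel-swapping stage: the soft-classified fraction fixes the class count inside each coarse pixel, and sub-pixel labels are swapped within the pixel to increase spatial clustering. The grid sizes, 8-neighbour attractiveness, and fixed iteration count are illustrative simplifications, not the paper's exact algorithm:

```python
import random

def pixel_swap(fractions, S=4, iters=50, seed=0):
    """Sub-pixel mapping by pixel swapping. fractions[r][c] is the
    soft-classified proportion of the target class in coarse pixel (r, c);
    each coarse pixel is split into S x S sub-pixels. The class count per
    coarse pixel stays fixed while labels are swapped to cluster the class."""
    rng = random.Random(seed)
    R, C = len(fractions), len(fractions[0])
    H, W = R * S, C * S
    fine = [[0] * W for _ in range(H)]
    # Initial random allocation consistent with the coarse fractions.
    for r in range(R):
        for c in range(C):
            cells = [(r * S + i, c * S + j) for i in range(S) for j in range(S)]
            k = round(fractions[r][c] * S * S)
            for (i, j) in rng.sample(cells, k):
                fine[i][j] = 1

    def attract(i, j):  # number of class-1 neighbours (8-connected)
        return sum(fine[a][b]
                   for a in range(max(0, i - 1), min(H, i + 2))
                   for b in range(max(0, j - 1), min(W, j + 2))
                   if (a, b) != (i, j))

    for _ in range(iters):
        for r in range(R):
            for c in range(C):
                cells = [(r * S + i, c * S + j) for i in range(S) for j in range(S)]
                ones = [p for p in cells if fine[p[0]][p[1]] == 1]
                zeros = [p for p in cells if fine[p[0]][p[1]] == 0]
                if not ones or not zeros:
                    continue
                worst1 = min(ones, key=lambda p: attract(*p))
                best0 = max(zeros, key=lambda p: attract(*p))
                if attract(*best0) > attract(*worst1):  # swap improves clustering
                    fine[worst1[0]][worst1[1]] = 0
                    fine[best0[0]][best0[1]] = 1
    return fine

# 2 x 2 coarse grid of class fractions, mapped at 4 x 4 sub-pixel resolution.
fine = pixel_swap([[0.25, 0.75], [0.75, 0.25]], S=4)
```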


This paper presents an efficient approach to the modeling and classification of vehicles using the vehicle's magnetic signature. A database was created from magnetic signatures collected over a wide range of vehicles (cars). A vehicle is modeled as an array of magnetic dipoles; the strength of each dipole and the separation between dipoles vary across vehicles, depending on the metallic composition and configuration of the vehicle. Based on this magnetic dipole data model, we present a novel method to extract a feature vector from the magnetic signature. For classification, a linear support vector machine is used to classify the vehicles based on the obtained feature vectors.
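The dipole-array model can be sketched by superposing the ~m/r^3 far-field magnitude of each dipole along the sensor track. The geometry, dipole placements, and moments below are hypothetical, chosen only to show how vehicles with different compositions yield different signatures:

```python
def dipole_signature(dipoles, xs, h=0.5):
    """Magnetic signature of a vehicle modelled as an array of point dipoles.
    dipoles is a list of (position_x, moment) pairs along the vehicle; the
    sensor passes at perpendicular distance h. Uses the ~m/r^3 far-field
    magnitude of a dipole; the geometry is an illustrative simplification."""
    sig = []
    for x in xs:
        b = 0.0
        for (px, m) in dipoles:
            r = ((x - px) ** 2 + h ** 2) ** 0.5
            b += m / r ** 3
        sig.append(b)
    return sig

# Two hypothetical vehicles with different dipole strengths and spacings.
xs = [i * 0.1 for i in range(-50, 51)]
car = dipole_signature([(-1.0, 1.0), (1.0, 1.2)], xs)
truck = dipole_signature([(-2.0, 3.0), (0.0, 2.5), (2.0, 3.1)], xs)
# A feature vector can then be extracted from the peaks and zero-crossings
# of the signature curve.
```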


Many popular models are available for document classification, such as the Naïve Bayes classifier, k-Nearest Neighbors and the Support Vector Machine. In all these cases, the representation is based on the “bag of words” model, which does not capture the actual semantic meaning of a word in a particular document. Semantics are better captured by the proximity of words and their co-occurrence in the document. We propose a new “Bag of Phrases” model to capture this discriminative power of phrases for text classification. We present a novel algorithm that extracts phrases from the corpus using the well-known topic model Latent Dirichlet Allocation (LDA) and integrates them into a vector space model for classification. Experiments show better classifier performance with the new Bag of Phrases model than with related representation models.
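A minimal sketch of the phrase-extraction idea: adjacent words assigned the same LDA topic are merged into a phrase, and phrase counts form the document vector. Here a fixed word-to-topic map stands in for the assignments an actual LDA run would produce, and the merging rule is a simplification of the paper's algorithm:

```python
from collections import Counter

def extract_phrases(doc_tokens, word_topic):
    """Merge adjacent words that share the same topic assignment into a
    phrase; leave other words as unigrams. word_topic maps word -> topic id
    (None for words without a meaningful topic, e.g. stopwords)."""
    phrases, i = [], 0
    while i < len(doc_tokens) - 1:
        w1, w2 = doc_tokens[i], doc_tokens[i + 1]
        if word_topic.get(w1) is not None and word_topic.get(w1) == word_topic.get(w2):
            phrases.append(w1 + "_" + w2)
            i += 2
        else:
            phrases.append(w1)
            i += 1
    if i == len(doc_tokens) - 1:
        phrases.append(doc_tokens[-1])
    return phrases

def bag_of_phrases(doc_tokens, word_topic):
    """Bag-of-Phrases vector: phrase -> count, usable in a vector space model."""
    return Counter(extract_phrases(doc_tokens, word_topic))

# Hypothetical topic assignments, as an LDA run (e.g. via gensim) might give.
word_topic = {"support": 0, "vector": 0, "machine": 0, "the": None}
vec = bag_of_phrases(["the", "support", "vector", "machine", "wins"], word_topic)
```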


This paper presents an efficient approach to the modeling and classification of vehicles using the vehicle's magnetic signature. A database was created from magnetic signatures collected over a wide range of vehicles (cars). A sensor-dependent approach called the Magnetic Field Angle Model is proposed for modeling the obtained magnetic signature. Based on this data model, we present a novel method to extract a feature vector from the magnetic signature. For classification, a linear support vector machine is used to classify the vehicles based on the obtained feature vectors.
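The final step, a linear SVM over the extracted feature vectors, can be sketched with a tiny Pegasos-style sub-gradient trainer (a stdlib stand-in for a library SVM such as scikit-learn's LinearSVC). The 2-D feature vectors and labels are hypothetical:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Tiny linear SVM trained with Pegasos-style sub-gradient descent.
    Labels must be +1/-1. Illustrative; a real pipeline would use a
    library SVM implementation."""
    rng = random.Random(seed)
    d = len(X[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):  # random pass over data
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1 - eta * lam) * wj for wj in w]    # regularization shrink
            if margin < 1:  # hinge-loss violation: step toward the example
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical 2-D feature vectors (e.g. peak amplitude, angle-sweep rate).
X = [[2.0, 0.0], [1.8, 0.2], [0.0, 2.0], [0.2, 1.8]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
```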


Clock synchronization in wireless sensor networks (WSNs) ensures that sensor nodes share the same reference clock time. This is necessary not only for various WSN applications but also for many system-level protocols for WSNs, such as MAC protocols and protocols for sleep scheduling of sensor nodes. The clock value of a node at a particular instant depends on its initial value and on the frequency of the crystal oscillator used in the node. This frequency varies from node to node and may also change over time with factors such as temperature and humidity. As a result, the clock values of different sensor nodes diverge from each other and from the real-time clock; hence the need for clock synchronization in WSNs. Consequently, many clock synchronization protocols for WSNs have been proposed in the recent past. These protocols differ from each other considerably, so there is a need to understand them on a common platform. Towards this goal, this survey categorizes the features of clock synchronization protocols for WSNs into three types, viz., structural features, technical features, and global objective features. Each of these categories has different options to further segregate the features for better understanding. The features used in this survey include all those used in existing surveys as well as new ones, such as how the clock value is propagated, when the clock value is propagated, and when the physical clock is updated, which allow a more systematic understanding of clock synchronization protocols in WSNs. This paper also gives a brief description of a few basic clock synchronization protocols for WSNs and shows how they fit into the above classification criteria. In addition, recent clock synchronization protocols for WSNs, which build on these basic protocols, are presented alongside the corresponding basic protocols. Indeed, the proposed model for characterizing clock synchronization protocols in WSNs can be used not only for analyzing existing protocols but also for designing new ones. (C) 2014 Elsevier B.V. All rights reserved.
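Two building blocks recur across these protocols: the hardware clock model (initial offset plus oscillator skew) and offset estimation from a two-way message exchange, as in TPSN-style sender-receiver synchronization. A minimal sketch with hypothetical timestamps and a symmetric link delay:

```python
def local_clock(t, offset, skew):
    """Hardware clock model used throughout the clock-sync literature: a
    node's clock value is its initial offset plus real time scaled by the
    oscillator skew (skew = 1 means a perfect crystal)."""
    return offset + skew * t

def estimate_offset(t1, t2, t3, t4):
    """Two-way (sender-receiver) exchange: t1/t4 are the requester's local
    send/receive times, t2/t3 the responder's local receive/send times.
    Assumes a symmetric link delay."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

# Node B runs 5 time units ahead of node A; one-way delay 0.1, processing 0.05.
delay, proc, send_real = 0.1, 0.05, 10.0
t1 = local_clock(send_real, 0.0, 1.0)                 # A sends (A's clock)
t2 = local_clock(send_real + delay, 5.0, 1.0)         # B receives (B's clock)
t3 = local_clock(send_real + delay + proc, 5.0, 1.0)  # B replies (B's clock)
t4 = local_clock(send_real + 2 * delay + proc, 0.0, 1.0)  # A receives
offset_estimate = estimate_offset(t1, t2, t3, t4)
```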


The time series of abundance indices for many groundfish populations, as determined from trawl surveys, are often imprecise and short, causing stock assessment estimates of abundance to be imprecise. To improve precision, prior probability distributions (priors) have been developed for parameters in stock assessment models by using meta-analysis, expert judgment on catchability, and empirically based modeling. This article presents a synthetic approach for formulating priors for rockfish trawl survey catchability (qgross). A multivariate prior for qgross across different surveys is formulated using 1) a correction factor for bias in estimating fish density between trawlable and untrawlable areas, 2) expert judgment on trawl net catchability, 3) observations from trawl survey experiments, and 4) data on the fraction of population biomass in each of the areas surveyed. The method is illustrated using bocaccio (Sebastes paucispinis) in British Columbia. Results indicate that expert judgment can be updated markedly by observing the catch-rate ratio from different trawl gears in the same areas. The marginal priors for qgross are consistent with empirical estimates obtained by fitting a stock assessment model to the survey data under a noninformative prior for qgross. Despite high prior uncertainty (prior coefficients of variation ≥0.8) and high prior correlation among the qgross components, the prior for qgross still enhances the precision of key stock assessment quantities.
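The multiplicative composition of such a prior can be sketched by Monte Carlo: draw each component from its own prior and multiply. The lognormal/beta shapes and all parameter values below are illustrative placeholders, not the paper's elicited quantities:

```python
import random

def sample_qgross(n=10_000, seed=42):
    """Monte Carlo sketch of composing a prior for gross survey catchability
    from independent components:
      qgross = (trawlable-area bias correction) x (net catchability)
               x (fraction of biomass in the surveyed area).
    Distribution shapes and parameters are illustrative assumptions."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        bias = rng.lognormvariate(0.0, 0.3)  # density bias correction factor
        net_q = rng.betavariate(4, 2)        # expert net catchability in (0, 1)
        frac = rng.betavariate(8, 2)         # fraction of biomass surveyed
        draws.append(bias * net_q * frac)
    return draws

draws = sample_qgross()
mean_q = sum(draws) / len(draws)  # summary of the implied marginal prior
```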


Latent Dirichlet Allocation (LDA) is a document-level language model. In general, LDA employs a symmetric Dirichlet distribution as the prior of the topic-word distributions to implement model smoothing. In this paper, we propose a data-driven smoothing strategy in which probability mass is allocated from smoothing data to latent variables by the intrinsic inference procedure of LDA. In this way, the arbitrariness of choosing priors for the latent variables of the multi-level graphical model is overcome. Following this data-driven strategy, two concrete methods, Laplacian smoothing and Jelinek-Mercer smoothing, are applied to the LDA model. Evaluations on different text categorization collections show that data-driven smoothing can significantly improve performance on both balanced and unbalanced corpora.
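The two smoothing methods named above can be shown directly on a toy topic-word count table. In the paper the mass is allocated through LDA's inference procedure rather than in closed form; the counts and weights here are illustrative:

```python
def laplace_smooth(counts, alpha=1.0):
    """Laplacian (additive) smoothing: add alpha pseudo-counts to every word
    before normalizing, so unseen words get nonzero probability."""
    total = sum(counts.values()) + alpha * len(counts)
    return {w: (c + alpha) / total for w, c in counts.items()}

def jelinek_mercer(counts, background, lam=0.7):
    """Jelinek-Mercer smoothing: interpolate the maximum-likelihood estimate
    with a background (corpus-level) distribution; lam weights the ML part."""
    total = sum(counts.values())
    return {w: lam * (counts[w] / total) + (1 - lam) * background[w]
            for w in counts}

# Toy topic-word counts and a toy corpus-level background distribution.
counts = {"model": 8, "topic": 2, "prior": 0}
background = {"model": 0.5, "topic": 0.3, "prior": 0.2}
p_lap = laplace_smooth(counts)
p_jm = jelinek_mercer(counts, background)
```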


A common objective in learning a model from data is to recover its network structure, while the model parameters are of minor interest. For example, we may wish to recover regulatory networks from high-throughput data sources. In this paper we examine how Bayesian regularization using a Dirichlet prior over the model parameters affects the learned model structure in a domain with discrete variables. Surprisingly, a weak prior, in the sense of a smaller equivalent sample size, leads to a strong regularization of the model structure (a sparse graph) given a sufficiently large data set. In particular, the empty graph is obtained in the limit of a vanishing strength of prior belief. This is diametrically opposite to what one might expect in this limit, namely the complete graph from an (unregularized) maximum likelihood estimate. Since the prior affects the parameters as expected, the prior strength balances a trade-off between regularizing the parameters and the structure of the model. We demonstrate the benefits of optimizing this trade-off in terms of predictive accuracy.
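The effect can be checked with the BDeu score, the Dirichlet-prior marginal likelihood commonly used in this setting: for perfectly independent data, a weak prior (small equivalent sample size) favors the parent-free family, i.e. the sparser graph. A toy sketch with illustrative counts:

```python
from math import lgamma

def bdeu_family_score(counts, ess):
    """Log marginal likelihood of one node given its parents under a BDeu
    Dirichlet prior. counts[j] lists the child-state counts for parent
    configuration j; ess is the equivalent sample size (prior strength).
    A toy illustration of the setting, not the paper's full experiments."""
    q = len(counts)       # number of parent configurations
    r = len(counts[0])    # number of child states
    a_j, a_jk = ess / q, ess / (q * r)
    score = 0.0
    for row in counts:
        n_j = sum(row)
        score += lgamma(a_j) - lgamma(a_j + n_j)
        score += sum(lgamma(a_jk + n) - lgamma(a_jk) for n in row)
    return score

# X and Y independent binary variables: 100 cases, perfectly balanced.
no_parent = bdeu_family_score([[50, 50]], ess=1.0)
with_parent = bdeu_family_score([[25, 25], [25, 25]], ess=1.0)
# With a weak prior, the empty graph (X without a parent) scores higher.
```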


X. Fu and Q. Shen, 'Knowledge representation for fuzzy model composition', in Proceedings of the 21st International Workshop on Qualitative Reasoning, 2007, pp. 47-54. Sponsorship: EPSRC.