900 results for BAYESIAN-INFERENCE
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
A methodology to define favorable areas in petroleum and mineral exploration is applied, which consists of weighting the exploratory variables in order to characterize their importance as exploration guides. The exploration data are spatially integrated over the selected area to establish the association between variables and deposits, and the relationships among the distribution, topology, and indicator pattern of all variables. Two methods of statistical analysis were compared. The first is Weights of Evidence modeling, a conditional probability approach (Agterberg, 1989a), and the second is Principal Components Analysis (Pan, 1993). In the conditional method, the favorability estimate is based on the probability of joint occurrence of deposit and variable, with the weights defined as natural logarithms of likelihood ratios. In the multivariate analysis, the cells that contain deposits are selected as control cells, and the weights are determined by eigendecomposition, being given by the coefficients of the eigenvector associated with the system's largest eigenvalue. The two weighting techniques and complementary procedures were tested on two case studies: 1. the Recôncavo Basin, Northeast Brazil (petroleum), and 2. the Itaiacoca Formation of the Ribeira Belt, Southeast Brazil (Mississippi Valley-type Pb-Zn deposits). The applied methodology proved easy to use and of great assistance in predicting favorability over large areas, particularly in the initial phase of exploration programs. © 1998 International Association for Mathematical Geology.
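As an illustration of the conditional (Weights of Evidence) weighting, the sketch below computes the positive and negative weights for a single binary evidence layer; the grid, cell counts, and values are hypothetical, not taken from the study areas.

```python
# A minimal sketch of weights-of-evidence weighting for one binary evidence layer,
# assuming `evidence` marks cells where the exploratory variable is present and
# `deposit` marks control cells (hypothetical data).
import numpy as np

def weights_of_evidence(evidence, deposit):
    """Return (W_plus, W_minus, contrast) for one binary evidence layer."""
    evidence = np.asarray(evidence, dtype=bool)
    deposit = np.asarray(deposit, dtype=bool)

    # Conditional probabilities of observing the evidence given deposit / no deposit.
    p_b_given_d = (evidence & deposit).sum() / deposit.sum()
    p_b_given_nd = (evidence & ~deposit).sum() / (~deposit).sum()

    # The weights are natural logarithms of likelihood ratios.
    w_plus = np.log(p_b_given_d / p_b_given_nd)
    w_minus = np.log((1 - p_b_given_d) / (1 - p_b_given_nd))
    return w_plus, w_minus, w_plus - w_minus  # the contrast measures spatial association

# Hypothetical 10-cell grid: 1 marks presence of the evidence layer / a deposit.
evidence = [1, 1, 0, 1, 0, 0, 0, 0, 1, 0]
deposit  = [1, 1, 0, 0, 0, 0, 1, 0, 0, 0]
print(weights_of_evidence(evidence, deposit))
```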
Abstract:
In this article we describe a feature extraction algorithm for pattern classification based on Bayesian decision boundaries and pruning techniques. The proposed method is capable of optimizing MLP neural classifiers by retaining only those neurons in the hidden layer that really contribute to correct classification. We also propose a method that defines a plausible number of neurons in the hidden layer based on stem-and-leaf plots of the training samples. Experimental investigation reveals the efficiency of the proposed method. © 2002 IEEE.
Abstract:
This paper presents a new methodology for the adjustment of fuzzy inference systems. A novel approach, which uses unconstrained optimization techniques, is developed to adjust the free parameters of the fuzzy inference system, such as the intrinsic parameters of the membership functions and the weights of the inference rules. This methodology is interesting not only for the results obtained through computer simulations, but also for its generality concerning the kind of fuzzy inference system used. It is therefore applicable both to the Mamdani architecture and to that suggested by Takagi-Sugeno. The validation of the presented methodology is accomplished through time series estimation; more specifically, estimation of the Mackey-Glass chaotic time series is used for validation.
Abstract:
This paper presents a new methodology for the adjustment of fuzzy inference systems, which uses a technique based on the error back-propagation method. The free parameters of the fuzzy inference system, such as the intrinsic parameters of the membership functions and the weights of the inference rules, are automatically adjusted. This methodology is interesting not only for the results obtained through computer simulations, but also for its generality concerning the kind of fuzzy inference system used. It is therefore applicable both to the Mamdani architecture and to that suggested by Takagi-Sugeno. The validation of the presented methodology is accomplished through time series estimation and a mathematical modeling problem; more specifically, the Mackey-Glass chaotic time series is used for validation. © Springer-Verlag Berlin Heidelberg 2007.
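To make this kind of adjustment concrete, the sketch below applies gradient descent to the membership-function centers and widths and to the rule weights of a zero-order Takagi-Sugeno system; the architecture, learning rate, and toy target are assumptions for illustration, not the paper's setup.

```python
# Gradient-based tuning of a zero-order Takagi-Sugeno system with Gaussian
# membership functions (hypothetical configuration and data).
import numpy as np

rng = np.random.default_rng(0)
centers = np.linspace(-1.0, 1.0, 5)   # Gaussian membership-function centers
widths = np.full(5, 0.5)              # membership-function widths
weights = rng.normal(size=5)          # rule weights (zero-order consequents)

def predict(x):
    mu = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))  # rule firing strengths
    return (mu * weights).sum() / mu.sum(), mu

def train_step(x, y, lr=0.05):
    """One stochastic gradient step on the squared prediction error."""
    y_hat, mu = predict(x)
    norm = mu / mu.sum()
    err = y_hat - y
    grad_w = err * norm
    grad_c = err * norm * (weights - y_hat) * (x - centers) / widths ** 2
    grad_s = err * norm * (weights - y_hat) * (x - centers) ** 2 / widths ** 3
    weights[:] -= lr * grad_w                               # adjust rule weights ...
    centers[:] -= lr * grad_c                               # ... membership centers ...
    widths[:] = np.maximum(widths - lr * grad_s, 0.1)       # ... and widths (kept positive)

# Toy target: approximate y = sin(pi * x) on [-1, 1].
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0)
    train_step(x, np.sin(np.pi * x))
print(predict(0.5)[0])  # should approach sin(pi * 0.5) = 1.0
```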
Abstract:
This paper proposes a fuzzy classification system for the risk of infestation by weeds in agricultural zones, taking the variability of weeds into account. The inputs of the system are features of the infestation extracted from maps of weed seed production and weed coverage estimated by kriging, and from the competitiveness inferred from narrow- and broad-leaved weeds. Furthermore, a Bayesian network classifier is used to extract rules from data, which are compared with the fuzzy rule set obtained from specialist knowledge. Results for risk inference in a maize crop field are presented and evaluated against the estimated yield loss. © 2009 IEEE.
Abstract:
System reliability depends on the reliability of the individual components. A methodology capable of inferring the functional state of these components is therefore necessary to establish reliable quality indices. Allocation models for maintenance and protective devices, among others, have been used to improve the quality and availability of services in electric power distribution systems. This paper proposes a methodology for assessing the reliability of distribution system components in an integrated way, using probabilistic models and fuzzy inference systems to infer the operating probability of each component. © 2012 IEEE.
Abstract:
Considering the importance of monitoring water quality parameters, remote sensing is a practicable alternative for detecting the limnological variables that interact with electromagnetic radiation, called optically active components (OAC). Among these, the phytoplankton pigment chlorophyll a is the most representative pigment of photosynthetic activity in all classes of algae. In this sense, this work aims to develop a method of spatial inference of chlorophyll a concentration using Artificial Neural Networks (ANN). To achieve this purpose, a multispectral image and fluorometric measurements were used as input data. The multispectral image was processed and the network training and validation datasets were carefully chosen. From this, the neural network architecture and its parameters were defined to model the variable of interest. At the end of the training phase, the trained network was applied to the image and a qualitative analysis was carried out. The integration of fluorometric and multispectral data, combined in an artificial neural network structure, provided good results for chlorophyll a inference.
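The regression task described above amounts to mapping band reflectances to a measured concentration; the sketch below shows that structure with a small multilayer perceptron. The band values, synthetic target, and network size are placeholders, not the data or architecture used in the work.

```python
# A minimal sketch of ANN regression from multispectral reflectance to
# chlorophyll-a concentration, with synthetic placeholder data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
bands = rng.uniform(0.05, 0.30, size=(200, 4))          # reflectance in 4 bands
chla = 10.0 * bands[:, 2] / bands[:, 1]                  # synthetic target (ug/L)
chla += rng.normal(0.0, 1.0, size=200)                   # measurement noise

X_train, X_test, y_train, y_test = train_test_split(bands, chla, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                   max_iter=5000, random_state=0)
net.fit(X_train, y_train)
print("R^2 on held-out pixels:", net.score(X_test, y_test))
```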
Abstract:
The use of saturated two-level designs is very popular, especially in industrial applications where the cost of experiments is high. Standard classical approaches are not appropriate for analyzing data from saturated designs, since we can only obtain estimates of the main factor effects and have no degrees of freedom left to estimate the error variance. In this paper, we propose the use of empirical Bayesian procedures to draw inferences from data obtained from saturated designs. The proposed methodology is illustrated with a simulated data set. © 2013 Growing Science Ltd. All rights reserved.
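One common empirical-Bayes treatment of effects from a saturated design, in the spirit of Box and Meyer (1986), models each estimated contrast as either inert or active and fits the hyperparameters from the data themselves; the sketch below illustrates that idea with made-up contrasts and is not the specific procedure of this paper.

```python
# Box-Meyer-style empirical Bayes for a saturated design: each contrast is
# inert, N(0, sigma^2), or active, N(0, (k*sigma)^2); alpha and sigma are fitted
# by maximizing the marginal likelihood (hypothetical contrasts below).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

effects = np.array([0.2, -0.4, 7.9, 0.6, -0.1, 5.2, -0.3])  # hypothetical contrasts
k = 10.0  # inflation factor for active effects

def neg_marginal_loglik(params):
    alpha = np.clip(params[0], 1e-4, 1 - 1e-4)
    sigma = np.exp(params[1])
    lik = ((1 - alpha) * norm.pdf(effects, 0, sigma)
           + alpha * norm.pdf(effects, 0, k * sigma))
    return -np.log(lik).sum()

fit = minimize(neg_marginal_loglik, x0=[0.2, 0.0], method="Nelder-Mead")
alpha_hat = np.clip(fit.x[0], 1e-4, 1 - 1e-4)
sigma_hat = np.exp(fit.x[1])

# Posterior probability that each effect is active, given the fitted hyperparameters.
active = alpha_hat * norm.pdf(effects, 0, k * sigma_hat)
inert = (1 - alpha_hat) * norm.pdf(effects, 0, sigma_hat)
print(active / (active + inert))
```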
Abstract:
The exponential-logarithmic distribution is a new lifetime distribution with decreasing failure rate and interesting applications in the biological and engineering sciences; a Bayesian analysis of its parameters is therefore desirable. Bayesian estimation requires the selection of prior distributions for all parameters of the model. In this case, researchers usually seek a prior that carries little information on the parameters, allowing the data to be very informative relative to the prior. Assuming some noninformative prior distributions, we present a Bayesian analysis using Markov Chain Monte Carlo (MCMC) methods. The Jeffreys prior is derived for the parameters of the exponential-logarithmic distribution and compared with other common priors such as the beta, gamma, and uniform distributions. We show through a simulation study that the maximum likelihood estimate may not exist except under restrictive conditions, and that the posterior density is sometimes bimodal when an improper prior density is used. © 2013 Copyright Taylor and Francis Group, LLC.
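As a concrete illustration of MCMC for this model, the sketch below runs a random-walk Metropolis sampler assuming the Tahmasbi and Rezaei (2008) parameterization of the exponential-logarithmic density. The flat priors (with the transformation Jacobian), proposal scale, and simulated lifetimes are assumptions chosen for illustration; the paper itself compares the Jeffreys prior with beta, gamma, and uniform priors.

```python
# Random-walk Metropolis for the exponential-logarithmic model, assuming
# f(x; p, b) = b(1-p)exp(-bx) / [(-ln p)(1 - (1-p)exp(-bx))], 0 < p < 1, b > 0.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100)  # hypothetical lifetime data

def log_lik(p, b):
    """Log-likelihood under the Tahmasbi-Rezaei (2008) parameterization."""
    if not (0.0 < p < 1.0 and b > 0.0):
        return -np.inf
    s = (1.0 - p) * np.exp(-b * x)
    return np.sum(np.log(b * s) - np.log(-np.log(p)) - np.log(1.0 - s))

def log_post(theta):
    """Posterior on (logit p, log b): flat priors on (p, b) plus the Jacobian."""
    p = 1.0 / (1.0 + np.exp(-theta[0]))
    b = np.exp(theta[1])
    return log_lik(p, b) + np.log(p * (1.0 - p)) + np.log(b)

theta = np.array([0.0, 0.0])          # start at p = 0.5, b = 1
samples = []
for _ in range(5000):
    proposal = theta + rng.normal(0.0, 0.1, size=2)   # random-walk proposal
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal
    samples.append(theta.copy())

samples = np.array(samples[1000:])    # discard burn-in
p_draws = 1.0 / (1.0 + np.exp(-samples[:, 0]))
b_draws = np.exp(samples[:, 1])
print("posterior means (p, beta):", p_draws.mean(), b_draws.mean())
```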
Abstract:
Insect pest phylogeography might be shaped both by biogeographic events and by human influence. Here, we conducted an approximate Bayesian computation (ABC) analysis to investigate the phylogeography of the New World screwworm fly, Cochliomyia hominivorax, with the aim of understanding its population history and its order and time of divergence. Our ABC analysis supports that populations spread from North to South in the Americas, in at least two different moments. The first split occurred between the North/Central American and South American populations at the end of the Last Glacial Maximum (15,300-19,000 YBP). The second split occurred between the North and South Amazonian populations at the transition between the Pleistocene and the Holocene (9,100-11,000 YBP). The species also experienced population expansion. Phylogenetic analysis likewise suggests this north-to-south colonization, and Maxent models suggest an increase in the number of suitable areas in South America from the past to the present. We found that the phylogeographic patterns observed in C. hominivorax cannot be explained by climatic oscillations alone and may be connected to host population histories. Interestingly, these patterns coincide closely with general patterns of ancient human movements in the Americas, suggesting that humans might have played a crucial role in shaping the distribution and population structure of this insect pest. This work presents the first hypothesis test regarding the processes that shaped the current phylogeographic structure of C. hominivorax and offers an alternative perspective for investigating the problem of insect pests. © 2013 Fresia et al.
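The ABC machinery referred to above boils down to simulating data under candidate parameters and keeping the parameters whose summary statistics fall close to the observed ones. The toy rejection sampler below shows only those mechanics; the actual study used coalescent simulations of divergence scenarios and richer summary statistics.

```python
# A minimal ABC rejection sketch with a toy simulator and toy summary statistics.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.poisson(lam=4.2, size=50)                 # stand-in for observed data
obs_summary = np.array([observed.mean(), observed.var()])

def simulate(theta, n=50):
    """Toy simulator: data generated under candidate parameter theta."""
    return rng.poisson(lam=theta, size=n)

accepted = []
tolerance = 0.75
for _ in range(20000):
    theta = rng.uniform(0.0, 10.0)                        # draw from the prior
    sim = simulate(theta)
    sim_summary = np.array([sim.mean(), sim.var()])
    if np.linalg.norm(sim_summary - obs_summary) < tolerance:
        accepted.append(theta)                            # keep parameters close to data

print("approximate posterior mean:", np.mean(accepted))
```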
Abstract:
Graduate Program in Applied and Computational Mathematics - FCT
Abstract:
Multivariate t models are symmetric and have heavier tails than the normal distribution, an important feature in financial data. This thesis presents the Bayesian estimation of a dynamic factor model, where the factors follow a multivariate autoregressive model, using the multivariate t distribution. Since the multivariate t distribution is complex, it is represented here as a mixture of a multivariate normal distribution and the square root of a chi-square distribution. This representation makes it possible to define the posteriors. Inference on the parameters was carried out by sampling from the posterior distribution with the Gibbs sampler. Convergence was verified through graphical analysis and the convergence diagnostics of Geweke (1992) and Raftery & Lewis (1992a). The method was applied to simulated data and to the indices of the world's major stock exchanges.
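The scale-mixture representation mentioned above amounts to dividing a multivariate normal draw by the square root of an independent chi-square(nu)/nu variable, which is what lets a Gibbs sampler treat the latent scale as another parameter with a known full conditional. The sketch below simply generates multivariate t draws that way; the dimensions and parameter values are arbitrary.

```python
# Multivariate t draws via the normal / sqrt(chi-square(nu)/nu) scale mixture.
import numpy as np

rng = np.random.default_rng(3)
nu = 5.0                                   # degrees of freedom
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])        # scale matrix

def rmvt(n):
    z = rng.multivariate_normal(np.zeros(3), Sigma, size=n)   # normal component
    w = rng.chisquare(nu, size=n) / nu                         # mixing variable
    return mu + z / np.sqrt(w)[:, None]                        # heavy-tailed draws

draws = rmvt(100000)
# Empirical covariance should approach Sigma * nu / (nu - 2) for nu > 2.
print(np.cov(draws, rowvar=False))
```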
Abstract:
Graduate Program in Applied and Computational Mathematics - FCT