24 results for Bayesian hierarchical model
Abstract:
This paper provides additional validation for the problem of estimating wave spectra from the first-order motions of a moored vessel. Prior investigations by the authors have shown that even a large-volume ship, such as an FPSO unit, can be adopted for on-board estimation of the wave field. The obvious limitation of the methodology concerns the filtering of high-frequency wave components, to which the vessel has no significant response. As a result, the estimation range depends directly on the characteristics of the vessel response. To extend this analysis, further small-scale tests were performed with a model of a pipe-laying crane-barge. Compared to the FPSO case, the results show that a broader range of typical sea states can be accurately estimated, including crossed-sea states with low peak periods. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Changepoint regression models were originally developed in connection with applications in quality control, where a change from the in-control to the out-of-control state must be detected from the available random observations. Since then, various changepoint models have been proposed for different applications such as reliability, econometrics, and medicine. In many practical situations the covariate cannot be measured precisely, and an alternative is the errors-in-variables regression model. In this paper we study the regression model with errors in variables and a changepoint from a Bayesian approach. A simulation study shows that the proposed procedure produces suitable estimates for the changepoint and all other model parameters.
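The discrete-changepoint idea can be illustrated with a minimal sketch (not the paper's errors-in-variables formulation): place a uniform prior on the changepoint location of a piecewise-constant mean with known noise variance, and profile out the segment means. The data, names, and profiling shortcut below are all illustrative assumptions.

```python
import math
import random

def changepoint_posterior(y, sigma=1.0):
    """Grid posterior over the changepoint location k, assuming a
    piecewise-constant mean, known noise sd, and a uniform prior on k.
    Segment means are profiled out with their sample averages
    (an empirical-Bayes shortcut, not a full marginalization)."""
    logpost = []
    for k in range(1, len(y)):       # changepoint after index k-1
        left, right = y[:k], y[k:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        sse = sum((v - m1) ** 2 for v in left) + sum((v - m2) ** 2 for v in right)
        logpost.append(-sse / (2 * sigma ** 2))
    mx = max(logpost)
    w = [math.exp(lp - mx) for lp in logpost]
    z = sum(w)
    return [wi / z for wi in w]      # posterior over k = 1..n-1

# Simulated series with a jump in the mean at index 50.
random.seed(0)
y = [random.gauss(0.0, 1.0) for _ in range(50)] + \
    [random.gauss(3.0, 1.0) for _ in range(50)]
post = changepoint_posterior(y)
k_hat = post.index(max(post)) + 1    # MAP changepoint estimate
```

With a jump of three noise standard deviations, the posterior concentrates sharply around the true location.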
Abstract:
We propose a new general Bayesian latent class model, based on a computationally intensive approach, for evaluating the performance of multiple diagnostic tests in situations where no gold standard test exists. The modeling is an interesting and suitable alternative to models with complex structures that involve the general case of several conditionally independent diagnostic tests, covariates, and strata with different disease prevalences. Stratifying the population according to different disease prevalence rates does not add marked complexity to the modeling, but it makes the model more flexible and interpretable. To illustrate the general model, we evaluate the performance of six diagnostic screening tests for Chagas disease, taking some epidemiological variables into account. Serology at the time of donation (negative, positive, inconclusive) was treated as a stratification factor in the model. The general model with stratification of the population performed better than its counterparts without stratification. The group formed by the testing laboratory Biomanguinhos FIOCRUZ-kit (c-ELISA and rec-ELISA) is the best option in the confirmation process, presenting a false-negative rate of 0.0002% under the serial scheme. We can be virtually certain that a donor is healthy when these two tests are both negative, and that the donor is chagasic when both are positive.
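The arithmetic behind combining two conditionally independent tests can be sketched as follows. The sensitivities and specificities below are hypothetical numbers for illustration, and the exact combination rule used in the paper's serial scheme is not reproduced here; both standard rules are shown.

```python
def combine_or(se1, sp1, se2, sp2):
    """'Believe the positive': call positive if either test is positive.
    Assumes conditional independence given true disease status."""
    se = 1 - (1 - se1) * (1 - se2)   # scheme misses only if both tests miss
    sp = sp1 * sp2                   # a combined negative needs both negative
    return se, sp

def combine_and(se1, sp1, se2, sp2):
    """'Believe the negative': call positive only if both tests are positive."""
    se = se1 * se2
    sp = 1 - (1 - sp1) * (1 - sp2)
    return se, sp

# Hypothetical test characteristics, for illustration only.
se, sp = combine_or(0.998, 0.97, 0.995, 0.96)
false_negative_rate = 1 - se   # P(combined scheme negative | diseased)
```

Under the "believe the positive" rule, the combined false-negative rate is the product of the individual miss rates, which is how a pair of highly sensitive tests can drive false negatives to near zero.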
Abstract:
To estimate causal relationships, time series econometricians must be aware of spurious correlation, a problem first noted by Yule (1926). To deal with it, one can work either with differenced series or with multivariate models: VAR (VEC or VECM) models, which usually include at least one cointegration relation. Although the Bayesian literature on VAR/VEC models is quite advanced, Bauwens et al. (1999) highlighted that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results". The present article applies the Full Bayesian Significance Test (FBST), designed specifically for sharp hypotheses, to cointegration rank selection in VECM time series models. It demonstrates the FBST using both simulated data and data sets available in the literature. As an illustration, standard noninformative priors are used.
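The FBST e-value can be illustrated in a toy conjugate setting rather than the VECM rank test: for a sharp hypothesis theta = theta0 under a normal posterior, the tangent set (the region of higher posterior density than the hypothesis point) has a closed-form probability. This is a minimal sketch, assuming a known univariate normal posterior.

```python
import math

def fbst_evalue_normal(theta0, post_mean, post_sd):
    """FBST e-value for the sharp hypothesis theta = theta0 when the
    posterior is N(post_mean, post_sd^2).  The tangent set
    {theta : p(theta|x) > p(theta0|x)} is an interval symmetric about
    the posterior mean, so its probability has a closed form."""
    z = abs(theta0 - post_mean) / post_sd
    prob_tangent = math.erf(z / math.sqrt(2))   # P(|theta - mean| < |theta0 - mean|)
    return 1.0 - prob_tangent                   # evidence in favor of H0

ev_supported = fbst_evalue_normal(0.0, 0.1, 1.0)   # theta0 near the posterior bulk
ev_refuted = fbst_evalue_normal(0.0, 4.0, 1.0)     # theta0 far in the tail
```

A hypothesis point sitting where the posterior mass is yields an e-value near 1; one far in the tail yields an e-value near 0, which is the sense in which the FBST measures evidence against sharp hypotheses.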
Abstract:
The mechanisms responsible for containing activity in systems represented by networks are crucial in various phenomena: for example, in diseases such as epilepsy that affect neuronal networks, and in information dissemination in social networks. The first models to account for contained activity included triggering and inhibition processes, but they cannot be applied to social networks, where inhibition is clearly absent. A recent model showed that contained activity can be achieved without inhibition processes, provided that the network is subdivided into modules (communities). In this paper, we introduce a new concept inspired by Hebbian theory, through which containment of activity is achieved by incorporating a decaying activity into a random-walk mechanism preferential to node activity. Upon selecting the decay coefficient within a proper range, we observed sustained activity in all the networks tested, namely random, Barabasi-Albert, and geographical networks. The generality of this finding was confirmed by showing that modularity is no longer needed if the integrate-and-fire dynamics incorporates the decay factor. Taken together, these results provide a proof of principle that persistent, restrained network activation can occur in the absence of any particular topological structure. This may be the reason why neuronal activity does not spread to the entire neuronal network, even when no special topological organization exists.
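The flavor of an activity-decay random walk can be sketched on a toy ring network. The update rule below (multiplicative decay each step, walker hops to a neighbor with probability proportional to that neighbor's activity plus a small floor, visited node's activity refreshed to 1) is an illustrative assumption, not the paper's exact model.

```python
import random

def simulate(adj, steps=500, decay=0.9, seed=1):
    """Toy activity-decay random walk: each step every node's activity
    decays by `decay`, then a walker hops to a neighbor chosen with
    probability proportional to (activity + eps) and refreshes that
    node's activity to 1."""
    rng = random.Random(seed)
    act = [0.0] * len(adj)
    node = 0
    act[node] = 1.0
    for _ in range(steps):
        act = [a * decay for a in act]          # activity decays everywhere
        nbrs = adj[node]
        weights = [act[j] + 1e-3 for j in nbrs] # preferential to node activity
        node = rng.choices(nbrs, weights=weights)[0]
        act[node] = 1.0                         # visit refreshes activity
    return act

# A 10-node ring: each node connects to its two neighbors.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
activity = simulate(ring)
```

Because decay caps every node's activity at the refresh value, total activity stays bounded regardless of topology, which is the containment property the paper attributes to the decay factor.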
Abstract:
Multivariate analyses of UV-Vis spectral data from cachaca wood extracts provide a simple and robust model to classify aged Brazilian cachacas according to the wood species used in the maturation barrels. The model is based on the inspection of 93 extracts of oak and different Brazilian wood species obtained with a non-aged cachaca used as the extraction solvent. Application of PCA (Principal Components Analysis) and HCA (Hierarchical Cluster Analysis) leads to the identification of 6 clusters of cachaca wood extracts (amburana, amendoim, balsamo, castanheira, jatoba, and oak). LDA (Linear Discriminant Analysis) affords classification of 10 different wood species used in the cachaca extracts (amburana, amendoim, balsamo, cabreuva-parda, canela-sassafras, castanheira, jatoba, jequitiba-rosa, louro-canela, and oak) with an accuracy ranging from 80% (amendoim and castanheira) to 100% (balsamo and jequitiba-rosa). The methodology provides a low-cost alternative to methods based on liquid chromatography and mass spectrometry for classifying cachacas aged in barrels composed of different wood species.
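The PCA step of such a chemometric workflow can be sketched with NumPy on synthetic stand-in "spectra"; the two-group data below are invented for illustration and have nothing to do with the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for UV-Vis spectra: two groups whose mean spectra
# differ by a constant offset, plus noise (illustrative data only).
base = np.sin(np.linspace(0, 3, 40))
group_a = base + 0.1 * rng.standard_normal((30, 40))
group_b = base + 0.5 + 0.1 * rng.standard_normal((30, 40))
X = np.vstack([group_a, group_b])

# PCA by eigendecomposition of the covariance of mean-centred data.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)          # eigenvalues in ascending order
order = np.argsort(eigval)[::-1]
scores = Xc @ eigvec[:, order[:2]]            # project onto first 2 PCs

# The between-group offset dominates PC1, separating the two clusters.
sep = abs(scores[:30, 0].mean() - scores[30:, 0].mean())
```

Cluster identification (HCA) and supervised classification (LDA) would then operate on these low-dimensional scores rather than on the raw spectra.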
Abstract:
Background: The temporal and geographical diversification of Neotropical insects remains poorly understood because of the complex changes in geological and climatic conditions that occurred during the Cenozoic. To better understand extant patterns in Neotropical biodiversity, we investigated the evolutionary history of three Neotropical swallowtail Troidini genera (Papilionidae). First, DNA-based species delimitation analyses were conducted to assess species boundaries within Neotropical Troidini using an enlarged fragment of the standard barcode gene. Molecularly delineated species were then used to infer a time-calibrated species-level phylogeny based on a three-gene dataset and Bayesian dating analyses. The corresponding chronogram was used to explore their temporal and geographical diversification through distinct likelihood-based methods. Results: The phylogeny for Neotropical Troidini was well resolved and strongly supported. Molecular dating and biogeographic analyses indicate that the extant lineages of Neotropical Troidini have a late Eocene (33-42 Ma) origin in North America. Two independent lineages (Battus and Euryades + Parides) reached South America via the GAARlandia temporary connection, and later became extinct in North America. They only began substantive diversification during the early Miocene in Amazonia. Macroevolutionary analysis supports the "museum model" of diversification, rather than Pleistocene refugia, as the best explanation for the diversification of these lineages. Conclusions: This study demonstrates that: (i) current Neotropical biodiversity may have originated ex situ; (ii) the GAARlandia bridge was important in facilitating invasions of South America; (iii) colonization of Amazonia initiated the crown diversification of these swallowtails; and (iv) Amazonia is not only a species-rich region but also acted as a sanctuary for the dynamics of this diversity. In particular, Amazonia probably allowed the persistence of old lineages and contributed to the steady accumulation of diversity over time with constant net diversification rates, a result that contrasts with previous studies on other South American butterflies.
Abstract:
Background: The search for enriched (aka over-represented or enhanced) ontology terms in a list of genes obtained from microarray experiments is becoming a standard procedure for system-level analysis. This procedure tries to summarize the information by focusing on classification schemes such as Gene Ontology and KEGG pathways, instead of on individual genes. Although it is well known in statistics that association and significance are distinct concepts, only the latter approach has been used to deal with the ontology term enrichment problem. Results: BayGO implements a Bayesian approach to search for enriched terms from microarray data. The R source code is freely available at http://blasto.iq.usp.br/~tkoide/BayGO in three versions: Linux, which can be easily incorporated into pre-existing pipelines; Windows, to be controlled interactively; and a web tool. The software was validated using a bacterial heat shock response dataset, since this stress triggers known system-level responses. Conclusion: The Bayesian model accounts for the fact that not all the genes from a given category are observable in microarray data, owing to low-intensity signal, quality filters, genes that were not spotted, and so on. Moreover, BayGO allows one to measure the statistical association between generic ontology terms and differential expression, instead of working only with the common significance analysis.
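BayGO's Bayesian model is not reproduced here, but the "common significance analysis" it contrasts with (the hypergeometric enrichment test) can be sketched in a few lines; the gene counts below are hypothetical.

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """Upper-tail hypergeometric p-value: the probability of seeing >= k
    genes from a category of size K in a selected list of size n,
    drawn without replacement from a universe of N genes."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers: 1000 genes on the array, 50 annotated to the category,
# 100 differentially expressed, 15 of them in the category
# (expected under the null: 100 * 50 / 1000 = 5).
p = hypergeom_enrichment_p(N=1000, K=50, n=100, k=15)
```

Such a p-value measures how surprising the overlap is, not how strongly the term is associated with differential expression, which is the distinction the abstract draws.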
Abstract:
In this work we compared estimates of the parameters of ARCH models obtained by a complete Bayesian method and by an empirical Bayesian method, adopting a non-informative and an informative prior distribution, respectively. We also considered a reparameterization of these models that maps the parameter space onto unconstrained real space, which permits choosing normal prior distributions for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated on the Telebras series from the Brazilian financial market. The results show that both methods are able to fit ARCH models with different numbers of parameters. The empirical Bayesian method provided a more parsimonious model and a better fit to the data than the complete Bayesian method.
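The reparameterization idea can be sketched for ARCH(1), whose parameters are constrained (omega > 0, 0 < alpha < 1): mapping them through log and logit transforms puts them on the real line, where normal priors are natural. The log/logit choice and the simulation below are illustrative assumptions, not the paper's exact transforms.

```python
import math
import random

def to_real(omega, alpha):
    """Map constrained ARCH(1) parameters (omega > 0, 0 < alpha < 1)
    onto R^2 so that normal priors can be placed on the transforms."""
    return math.log(omega), math.log(alpha / (1 - alpha))

def to_constrained(t_omega, t_alpha):
    """Inverse map: exp recovers omega, the logistic recovers alpha."""
    return math.exp(t_omega), 1 / (1 + math.exp(-t_alpha))

def simulate_arch1(omega, alpha, n=200, seed=42):
    """Simulate y_t = sigma_t * e_t with sigma_t^2 = omega + alpha * y_{t-1}^2."""
    rng = random.Random(seed)
    y, y_prev = [], 0.0
    for _ in range(n):
        sigma2 = omega + alpha * y_prev ** 2
        y_prev = math.sqrt(sigma2) * rng.gauss(0.0, 1.0)
        y.append(y_prev)
    return y

w, a = to_constrained(*to_real(0.5, 0.3))   # round trip recovers the parameters
series = simulate_arch1(0.5, 0.3)
```

An MCMC sampler would then propose moves for the transformed parameters freely in real space, with the inverse map guaranteeing a valid ARCH model at every step.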