965 results for C11 - Bayesian Analysis


Relevance: 90.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 90.00%

Publisher:

Abstract:

Objective: The Brazilian National Hansen's Disease Control Program recently identified clusters with high disease transmission. Herein, we present different spatial analytical approaches to define highly vulnerable areas in one of these clusters. Method: The study area included 373 municipalities in the four Brazilian states of Maranhão, Pará, Tocantins and Piauí. Spatial analysis was based on municipalities as the observation unit, considering the following disease indicators: (i) rate of new cases per 100 000 population, (ii) rate of cases in patients < 15 years per 100 000 population, (iii) new cases with grade-2 disability per 100 000 population and (iv) proportion of new cases with grade-2 disability. We performed descriptive spatial analysis, local empirical Bayesian analysis and the spatial scan statistic. Results: A total of 254 (68.0%) municipalities were classified as hyperendemic (mean annual detection rates > 40 cases per 100 000 inhabitants). Municipalities with higher detection rates were concentrated in Pará and in the center of Maranhão. The spatial scan statistic identified 23 likely clusters of new leprosy case detection, most of them located in these two states. These clusters included only 32% of the total population but 55.4% of new leprosy cases. We also identified 16 significant clusters for the detection rate in patients < 15 years and 11 likely clusters of new cases with grade-2 disability. Several clusters of grade-2 disability rates overlap with those of new case detection and of detection in children < 15 years of age. The proportion of new cases with grade-2 disability did not reveal any significant clusters. Conclusions: Several municipality clusters of high leprosy transmission and late diagnosis were identified in an endemic area using different statistical approaches. The spatial scan statistic is adequate to validate and confirm high-risk leprosy areas for transmission and late diagnosis identified using descriptive spatial analysis and the local empirical Bayesian method. National and state leprosy control programs urgently need to intensify control actions in these highly vulnerable municipalities.
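
The local empirical Bayesian analysis used above smooths unstable small-area rates toward a reference rate before mapping. The following minimal sketch illustrates the idea with a global Poisson-Gamma smoother in Python (the study used the local, neighbourhood-based variant, and all counts and populations below are invented):

```python
import numpy as np

# Invented data: new-case counts and populations for a few municipalities.
cases = np.array([12, 0, 45, 3, 7])
pop = np.array([20_000, 5_000, 60_000, 8_000, 15_000])
e = pop / 100_000                       # expected-count scale (per 100 000)
raw_rate = cases / e                    # new cases per 100 000 inhabitants

# Poisson-Gamma model: cases_i ~ Poisson(e_i * theta_i), theta_i ~ Gamma(a, b).
# Moment-matched (a, b) from the raw rates; a local smoother would restrict
# these moments to each municipality's neighbours.
m = np.average(raw_rate, weights=pop)              # prior mean rate
v = np.average((raw_rate - m) ** 2, weights=pop)   # variance of raw rates
v = max(v - m / e.mean(), 1e-9)                    # remove Poisson noise part
a, b = m**2 / v, m / v                             # Gamma prior with mean m

# Posterior mean shrinks unstable rates (small populations) toward m.
smoothed = (cases + a) / (e + b)
print(np.round(raw_rate, 1), np.round(smoothed, 1))
```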

Relevance: 90.00%

Publisher:

Abstract:

Background: Hepatitis B virus (HBV) infection is one of the most prevalent viral infections in humans and represents a serious public health problem. In Colombia, our group recently reported the presence of subgenotypes F3, A2 and genotype G in Bogota. The aim of this study was to characterize the HBV genotypes circulating in Quibdo, the largest Afro-descendant community in Colombia. Sixty HBsAg-positive samples were studied. A fragment of 1306 bp (S/POL) was amplified by nested PCR. Samples positive for the S/POL fragment were submitted to PCR amplification of the complete HBV genome. Findings: The distribution of HBV genotypes was: A1 (52.17%), E (39.13%), D3 (4.3%) and F3/A1 (4.3%). An HBV recombinant strain of subgenotype F3/A1 was found for the first time. Conclusions: This study is the first analysis of complete HBV genome sequences from an Afro-Colombian population. An important presence of the HBV/A1 and HBV/E genotypes was found, and a new recombinant strain of HBV genotype F3/A1 was reported in this population. This finding may be related to the introduction of these genotypes during the time of slavery.

Relevance: 90.00%

Publisher:

Abstract:

The rise of evidence-based medicine, together with important progress in statistical methods and computational power, has led to a rebirth of the more than 200-year-old Bayesian framework. The use of Bayesian techniques, in particular in the design and interpretation of clinical trials, offers several substantial advantages over the classical statistical approach. First, in contrast to classical statistics, Bayesian analysis allows a direct statement regarding the probability that a treatment was beneficial. Second, Bayesian statistics allow the researcher to incorporate any prior information in the analysis of the experimental results. Third, Bayesian methods can efficiently handle complex statistical models, which are suited for advanced clinical trial designs. Finally, Bayesian statistics encourage a thorough consideration and presentation of the assumptions underlying an analysis, which enables the reader to fully appraise the authors' conclusions. Both Bayesian and classical statistics have their respective strengths and limitations and should be viewed as complementary; we do not attempt a head-to-head comparison, as this is beyond the scope of the present review. Rather, the objective of the present article is to provide a nonmathematical, reader-friendly overview of the current practice of Bayesian statistics, coupled with numerous intuitive examples from the field of oncology. It is hoped that this educational review will be a useful resource to the oncologist and result in a better understanding of the scope, strengths, and limitations of the Bayesian approach.
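
The direct probability statement mentioned in the first point can be made concrete with a toy example: with Beta priors on each arm's response rate, the posterior probability that the treatment is better than control is a one-line Monte Carlo computation. A minimal sketch in Python with invented trial counts (this is not an example taken from the review itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trial: 28/40 responses on treatment, 20/40 on control.
t_resp, t_n = 28, 40
c_resp, c_n = 20, 40

# Uniform Beta(1, 1) priors; posteriors are Beta(successes + 1, failures + 1).
t_draws = rng.beta(t_resp + 1, t_n - t_resp + 1, size=100_000)
c_draws = rng.beta(c_resp + 1, c_n - c_resp + 1, size=100_000)

# Direct statement: P(treatment response rate > control | data).
print(f"P(treatment better) = {np.mean(t_draws > c_draws):.3f}")
```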

Relevance: 90.00%

Publisher:

Abstract:

Human social organization can deeply affect levels of genetic diversity. This fact implies that genetic information can be used to study social structures, which is the basis of ethnogenetics. Recently, methods have been developed to extract this information from genetic data gathered from subdivided populations that have gone through recent spatial expansions, as is typical of most human populations. Here, we perform a Bayesian analysis of mitochondrial and Y-chromosome diversity in three matrilocal and three patrilocal groups from northern Thailand to infer the number of males and females arriving in these populations each generation and to estimate the age of their range expansion. We find that the number of male immigrants is eight times smaller in patrilocal populations than in matrilocal populations, whereas women move 2.5 times more often into patrilocal populations than into matrilocal ones. In addition to providing a genetic quantification of sex-specific dispersal rates in human populations, we show that although men and women are exchanged at a similar rate between matrilocal populations, far fewer men than women move into patrilocal populations. This finding is compatible with the hypothesis that men strictly control male immigration and promote female immigration in patrilocal populations, whereas immigration is much less regulated in matrilocal populations.

Relevance: 90.00%

Publisher:

Abstract:

Defining the pharmacokinetics of drugs in overdose is complicated. Deliberate self-poisoning is generally impulsive and associated with poor accuracy in the dose history. In addition, early blood samples are rarely collected to characterize the whole plasma concentration-time profile, and the effect of decontamination on the pharmacokinetics is uncertain. The aim of this study was to explore a fully Bayesian methodology for population pharmacokinetic analysis of data that arose from deliberate self-poisoning with citalopram. Prior information on the pharmacokinetic parameters was elicited from 14 published studies on citalopram taken in therapeutic doses. The data set included concentration-time data from 53 patients studied after 63 citalopram overdose events (dose range: 20-1700 mg). Activated charcoal was administered between 0.5 and 4 h after 17 overdose events. The clinical investigator graded the veracity of the patients' dosing histories on a 5-point ordinal scale. Inclusion of informative priors stabilised the pharmacokinetic model, and the population mean values could be estimated well. There were no indications of non-linear clearance after excessive doses. The final model included an estimated uncertainty of the dose amount, which in a simulation study was shown not to affect the model's ability to characterise the effects of activated charcoal. The effect of activated charcoal on clearance and bioavailability was pronounced, resulting in a 72% increase and a 22% decrease, respectively. These findings suggest that charcoal administration is potentially beneficial after citalopram overdose. The methodology seems promising for exploring the dose-exposure relationship in the toxicological setting.
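
To illustrate how priors elicited from therapeutic-dose studies can stabilize an analysis of sparse overdose data, here is a deliberately simplified single-subject sketch in Python: mono-exponential elimination, clearance as the only unknown, and a lognormal prior standing in for the elicited information. All numbers are invented; the actual study fitted a full population model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented sparse overdose data: dose, fixed volume, two late concentrations.
dose, V = 400.0, 1000.0                  # mg, L (illustrative values)
t_obs = np.array([12.0, 24.0])           # h
c_obs = np.array([0.25, 0.16])           # mg/L

# Informative lognormal prior on clearance, as if elicited from published
# therapeutic-dose studies (hypothetical median 20 L/h).
prior_mu, prior_sd = np.log(20.0), 0.25

def log_post(log_cl, sigma=0.15):
    cl = np.exp(log_cl)
    pred = dose / V * np.exp(-cl / V * t_obs)          # mono-exponential decline
    loglik = -0.5 * np.sum(((np.log(c_obs) - np.log(pred)) / sigma) ** 2)
    logprior = -0.5 * ((log_cl - prior_mu) / prior_sd) ** 2
    return loglik + logprior

# Random-walk Metropolis on log(CL).
draws, cur = [], prior_mu
for _ in range(20_000):
    prop = cur + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop
    draws.append(np.exp(cur))
print(f"posterior median CL = {np.median(draws[5000:]):.1f} L/h")
```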

Relevance: 90.00%

Publisher:

Abstract:

The aim of this report is to describe the use of WinBUGS for two datasets that arise from typical population pharmacokinetic studies. The first dataset relates to gentamicin concentration-time data that arose as part of routine clinical care of 55 neonates. The second dataset incorporated data from 96 patients receiving enoxaparin. Both datasets were originally analyzed using NONMEM. In the first instance, although NONMEM provided reasonable estimates of the fixed-effects parameters, it was unable to provide satisfactory estimates of the between-subject variance. In the second instance, the use of NONMEM resulted in the development of a successful model, albeit with limited available information on the between-subject variability of the pharmacokinetic parameters. WinBUGS was used to develop a model for both of these datasets. Model comparison for the enoxaparin dataset was performed using the posterior distribution of the log-likelihood and a posterior predictive check. The use of WinBUGS supported the same structural models tried in NONMEM. For the gentamicin dataset, a one-compartment model with intravenous infusion was developed, and the population parameters, including the full between-subject variance-covariance matrix, were available. Analysis of the enoxaparin dataset supported a two-compartment model as superior to the one-compartment model, based on the posterior predictive check. Again, the full between-subject variance-covariance matrix parameters were available. Fully Bayesian approaches using MCMC methods, via WinBUGS, can offer added value for the analysis of population pharmacokinetic data.
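
The posterior predictive check used for model comparison amounts to simulating replicate datasets from posterior parameter draws and asking whether a chosen statistic of the observed data would be unusual under the model. A generic sketch, assuming posterior draws are already available (here they are simulated stand-ins, not WinBUGS output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed inputs: observed data and posterior draws of (mean, sd) from an
# earlier MCMC run; both are simulated here purely for illustration.
y_obs = rng.normal(5.0, 2.0, size=50)
post_mu = rng.normal(5.0, 0.3, size=2_000)     # stand-in posterior draws
post_sd = np.abs(rng.normal(2.0, 0.2, size=2_000))

def stat(y):
    return y.max()  # discrepancy statistic; pick one relevant to the model

# Simulate one replicate dataset per posterior draw and compare statistics.
t_rep = np.array([stat(rng.normal(m, s, size=y_obs.size))
                  for m, s in zip(post_mu, post_sd)])
ppp = np.mean(t_rep >= stat(y_obs))   # posterior predictive p-value
print(f"posterior predictive p-value = {ppp:.2f}")  # near 0 or 1 flags misfit
```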

Relevance: 90.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
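
The reduced-rank tensor factorization referred to here is easy to state in code: a latent class (PARAFAC) model represents the joint pmf of several categorical variables as a mixture over latent classes within which the variables are independent. A small illustrative sketch constructing such a probability tensor:

```python
import numpy as np

rng = np.random.default_rng(3)

p, d, k = 3, 4, 2   # 3 variables, 4 levels each, 2 latent classes

# Mixture weights over latent classes and per-class marginals lam[j, h, :].
nu = rng.dirichlet(np.ones(k))                # shape (k,)
lam = rng.dirichlet(np.ones(d), size=(p, k))  # shape (p, k, d)

# Joint pmf tensor: P(y1, y2, y3) = sum_h nu_h * prod_j lam[j, h, y_j].
pmf = np.einsum("h,ha,hb,hc->abc", nu, lam[0], lam[1], lam[2])
assert np.isclose(pmf.sum(), 1.0)

# The nonnegative rank of this 4x4x4 probability tensor is at most k = 2;
# a Tucker decomposition would instead use a k x k x k core tensor.
print(pmf.shape, pmf.sum())
```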

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
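
As a rough analogue of the Gaussian posterior approximation studied in Chapter 4, the sketch below computes a Laplace approximation for a small Poisson log-linear model. Note that the chapter's result concerns the KL-optimal Gaussian under Diaconis--Ylvisaker priors, which is generally not the Laplace approximation, so this is purely illustrative (a Gaussian prior is used here for simplicity):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Small log-linear (Poisson regression) setup with a weak Gaussian prior.
X = np.column_stack([np.ones(30), rng.standard_normal(30)])
beta_true = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ beta_true))
tau2 = 100.0  # prior variance on each coefficient

def neg_log_post(beta):
    eta = X @ beta
    return -(y @ eta - np.exp(eta).sum()) + 0.5 * beta @ beta / tau2

# Gaussian approximation N(mode, H^{-1}), centred at the posterior mode,
# with H the Hessian of the negative log posterior at the mode.
opt = minimize(neg_log_post, np.zeros(2), method="BFGS")
mode = opt.x
W = np.exp(X @ mode)
H = X.T @ (W[:, None] * X) + np.eye(2) / tau2
cov = np.linalg.inv(H)
print("mode:", mode.round(3))
print("approx. posterior sd:", np.sqrt(np.diag(cov)).round(3))
```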

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
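
The basic object in this waiting-time paradigm is straightforward to compute: given a series and a high threshold, record the gaps between successive exceedances. A minimal sketch, using a simulated AR(1) series purely as illustrative input:

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative series with temporal dependence: Gaussian AR(1).
n, phi = 10_000, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

u = np.quantile(x, 0.98)                # high threshold
exceed_times = np.flatnonzero(x > u)    # indices of threshold exceedances
waits = np.diff(exceed_times)           # waiting times between exceedances

# Clustering of extremes (temporal dependence in the tail) shows up as an
# excess of very short waiting times relative to an independent series.
print(f"threshold u = {u:.2f}, {exceed_times.size} exceedances")
print(f"fraction of waiting times equal to 1: {np.mean(waits == 1):.2f}")
```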

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
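
One of the approximating kernels analysed in Chapter 6 replaces the full-data likelihood with one computed on a random subset. The sketch below shows the basic construction for a Gaussian mean, with the subsample log-likelihood rescaled to the full sample size; the resulting chain targets only an approximation of the posterior, and the chapter's contribution is the theory of this error/budget trade-off, not this particular sampler:

```python
import numpy as np

rng = np.random.default_rng(2)

y = rng.normal(3.0, 1.0, size=100_000)   # large dataset, unknown mean, sd = 1
n, m = y.size, 1_000                     # full size and subsample size

def approx_loglik(mu):
    sub = y[rng.integers(0, n, size=m)]              # random subset of the data
    return (n / m) * -0.5 * np.sum((sub - mu) ** 2)  # rescaled to full data

# Random-walk Metropolis with a noisy, subsampled likelihood (flat prior);
# fresh subsamples each step make this an approximate transition kernel.
draws, cur = [], 0.0
for _ in range(5_000):
    prop = cur + 0.01 * rng.standard_normal()
    if np.log(rng.uniform()) < approx_loglik(prop) - approx_loglik(cur):
        cur = prop
    draws.append(cur)
print(f"approximate posterior mean = {np.mean(draws[1000:]):.3f}")
```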

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
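
For reference, the truncated-Normal data augmentation sampler mentioned above (Albert and Chib's scheme for probit regression, here with a flat prior on the coefficients) takes only a few lines; on rare-event data such as the simulated example below, its draws become highly autocorrelated, consistent with the chapter's findings:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

# Simulated probit data with rare successes (strongly negative intercept).
n = 2_000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = (X @ np.array([-2.5, 0.5]) + rng.standard_normal(n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)
beta = np.zeros(2)
draws = []
for _ in range(2_000):
    # 1. Draw latent z_i ~ Normal(x_i' beta, 1), truncated by the sign of y_i.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)   # z > 0 when y = 1
    hi = np.where(y == 1, np.inf, -mu)    # z < 0 when y = 0
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # 2. Draw beta | z ~ Normal((X'X)^{-1} X'z, (X'X)^{-1}) under a flat prior.
    beta = XtX_inv @ (X.T @ z) + chol @ rng.standard_normal(2)
    draws.append(beta.copy())
print("posterior means:", np.mean(draws[500:], axis=0).round(2))
```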

Relevance: 90.00%

Publisher:

Abstract:

The main goal of the LISA Pathfinder (LPF) mission is to estimate the acceleration noise models of the overall LISA Technology Package (LTP) experiment on board. This will be of crucial importance for future space-based gravitational-wave (GW) detectors, such as eLISA. Here, we present the Bayesian analysis framework used to process the planned system identification experiments designed for that purpose. In particular, we focus on the analysis strategies to predict the accuracy of the parameters that describe the system in all degrees of freedom. The data sets were generated during the latest operational simulations organised by the data analysis team, and this work is part of the LTPDA Matlab toolbox.

Relevance: 90.00%

Publisher:

Abstract:

A Similar Exposure Group (SEG) can be created by evaluating workers who perform the same or similar tasks, the hazards they are exposed to, the frequency and duration of their exposures, the engineering controls available during their operations, the personal protective equipment used, and exposure data. For this report, samples from one facility, which has collected nearly 40,000 samples of various types, will be evaluated to determine whether the creation of a SEG can be supported. The data will be reviewed for consistency with collection methods and laboratory detection limits, and a subset of the samples may be selected based on this review. The data will also be evaluated statistically to determine whether they are sufficient to terminate sampling. IHDataAnalyst V1.27, which uses Bayesian analysis to assist in making determinations, will be used to assess the data. The 95 percent confidence interval will be calculated and used in making decisions. This evaluation will be used to determine whether a SEG can be created for any of the workers and to determine the need for future sample collection. The data and evaluation presented in this report have been selected and evaluated specifically for the purposes of this project.
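
The kind of Bayesian computation performed by tools such as IHDataAnalyst can be sketched as follows: treating log-exposures as Normal with a noninformative (Jeffreys) prior, posterior draws of the mean and variance yield a credible interval for an upper-percentile exposure, which can then be compared against an occupational exposure limit. All sample values and the limit below are invented, and this is a generic sketch rather than the program's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(8)

# Invented air-sampling results (mg/m^3) for one candidate SEG.
x = np.array([0.12, 0.35, 0.08, 0.22, 0.51, 0.17, 0.29, 0.11])
oel = 1.0                      # hypothetical occupational exposure limit
logs = np.log(x)
n, ybar, s2 = x.size, logs.mean(), logs.var(ddof=1)

# Jeffreys prior on (mu, sigma^2) of the lognormal: sigma^2 | data is a
# scaled inverse-chi-square; mu | sigma^2, data is Normal.
ndraw = 50_000
sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=ndraw)
mu = rng.normal(ybar, np.sqrt(sigma2 / n))

# Posterior of the 95th-percentile exposure X95 = exp(mu + 1.645 * sigma).
x95 = np.exp(mu + 1.645 * np.sqrt(sigma2))
lo, hi = np.percentile(x95, [2.5, 97.5])
print(f"95% credible interval for X95: ({lo:.2f}, {hi:.2f}) mg/m^3")
print(f"P(X95 > OEL) = {np.mean(x95 > oel):.2f}")
```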

Relevance: 80.00%

Publisher:

Abstract:

In this article we present a Bayesian analysis for the stochastic volatility (SV) model and a generalized form of it, with the objective of estimating the volatility of financial time series. Considering some special cases of SV models, we use Markov chain Monte Carlo algorithms and the WinBugs software to obtain posterior summaries for the different forms of SV models. We introduce some Bayesian discrimination techniques for choosing the best model to estimate volatilities and to forecast financial series. An empirical example applying the methodology to the IBOVESPA financial series is presented.
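
For reference, the basic SV model referred to here writes returns as y_t = exp(h_t/2) eps_t, with the log-volatility h_t following a stationary AR(1) process. A short simulation sketch in Python (parameter values are illustrative, not IBOVESPA estimates):

```python
import numpy as np

rng = np.random.default_rng(6)

# Basic stochastic volatility model:
#   h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t,  eta_t ~ N(0, 1)
#   y_t = exp(h_t / 2) * eps_t,                           eps_t ~ N(0, 1)
T, mu, phi, sigma_eta = 1_000, -1.0, 0.95, 0.2

h = np.empty(T)
h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal()
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)   # simulated returns

# Volatility clustering: |y_t| is persistent even though y_t is uncorrelated.
print(f"sd of returns: {y.std():.3f}; lag-1 autocorr of |y|: "
      f"{np.corrcoef(np.abs(y[:-1]), np.abs(y[1:]))[0, 1]:.2f}")
```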

Relevance: 80.00%

Publisher:

Abstract:

Diagnostic methods have been an important tool in regression analysis for detecting anomalies such as departures from the error assumptions and the presence of outliers and influential observations in fitted models. Assuming censored data, we considered both a classical analysis and a Bayesian analysis with noninformative priors for the parameters of a model with a cure fraction. The Bayesian approach used Markov chain Monte Carlo methods with Metropolis-Hastings steps to obtain the posterior summaries of interest. Several influence methods, such as local influence, total local influence of an individual, local influence on predictions and generalized leverage, were derived, analyzed and discussed for survival data with a cure fraction and covariates. The relevance of the approach is illustrated with a real data set, where it is shown that, by removing the most influential observations, the decision about which model best fits the data changes.
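
The case-deletion logic behind such influence measures can be illustrated in a much simpler setting: ordinary linear regression with Cook's distance, which summarizes how far the fitted values move when one observation is removed. This sketch is only an analogy; the paper derives local influence and generalized leverage for censored cure-fraction models:

```python
import numpy as np

rng = np.random.default_rng(9)

# Illustrative linear regression with one planted influential point.
n = 30
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)
x[0], y[0] = 10.0, 20.0          # far from the line: influential observation
X = np.column_stack([np.ones(n), x])

beta = np.linalg.lstsq(X, y, rcond=None)[0]
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages (hat-matrix diag)
resid = y - X @ beta
p = X.shape[1]
s2 = resid @ resid / (n - p)

# Cook's distance: change in the fit when observation i is deleted.
cooks = resid**2 / (p * s2) * h / (1 - h) ** 2
print("most influential observation:", np.argmax(cooks),
      "D =", cooks.max().round(2))
```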

Relevance: 80.00%

Publisher:

Abstract:

Background: The Brazilian population mainly descends from European colonizers, Africans and Native Americans. Some Afro-descendants have lived in small isolated communities since the slavery period. The epidemiological status of HBV infection in Quilombo communities in northeastern Brazil remains unknown. The aim of this study was to characterize the HBV genotypes circulating in an isolated Quilombo community in Maranhão State, Brazil. Methods: Seventy-two samples from the Frechal Quilombo community in Maranhão were collected. All serum samples were screened by enzyme-linked immunosorbent assay for the presence of hepatitis B surface antigen (HBsAg). HBsAg-positive samples were submitted to DNA extraction, and a fragment of 1306 bp partially comprising the HBsAg and polymerase coding regions (S/POL) was amplified by nested PCR and its nucleotide sequence determined. Viral isolates were genotyped by phylogenetic analysis using reference sequences from each genotype obtained from GenBank (n = 320). Sequences were aligned using the Muscle software and edited in the SE-AL software. Bayesian phylogenetic analyses were conducted using the Markov chain Monte Carlo (MCMC) method to obtain the MCC tree in BEAST v.1.5.3. Results: Of the 72 individuals, 9 (12.5%) were HBsAg-positive, and 4 of them were successfully sequenced for the 1306 bp fragment. All of these samples were subgenotype A1 and grouped together with other sequences reported from Brazil. Conclusions: This study represents the first characterization of HBV genotypes in this community in Maranhão State, Brazil, and documents a high frequency of HBV infection and the exclusive presence of subgenotype A1 in this Afro-descendant community.

Relevance: 80.00%

Publisher:

Abstract:

We present a re-analysis of the Geneva-Copenhagen survey, which benefits from the infrared flux method to improve the accuracy of the derived stellar effective temperatures and uses the latter to build a consistent and improved metallicity scale. Metallicities are calibrated on high-resolution spectroscopy and checked against four open clusters and a moving group, showing excellent consistency. The new temperature and metallicity scales provide a better match to theoretical isochrones, which are used for a Bayesian analysis of stellar ages. With respect to previous analyses, our stars are on average 100 K hotter and 0.1 dex more metal-rich, which shifts the peak of the metallicity distribution function to around the solar value. From Strömgren photometry we are able to derive for the first time a proxy for [alpha/Fe] abundances, which enables us to perform a tentative dissection of the chemical thin and thick disc. We find evidence that the latter is composed of an old, mildly but systematically alpha-enhanced population that extends to super-solar metallicities, in agreement with spectroscopic studies. Our revision offers the largest existing kinematically unbiased sample of the solar neighbourhood that contains full information on kinematics, metallicities and ages, and thus provides better constraints on the physical processes relevant to the build-up of the Milky Way disc, enabling a better understanding of the Sun in a Galactic context.

Relevance: 80.00%

Publisher:

Abstract:

Survival models involving frailties are commonly applied in studies where correlated event-time data arise due to natural or artificial clustering. In this paper we present an application of such models in the animal breeding field. Specifically, a mixed survival model with a multivariate correlated frailty term is proposed for the analysis of data from over 3611 Brazilian Nellore cattle. The primary aim is to evaluate parental genetic effects on the number of days that their progeny need to achieve a commercially specified standard weight gain. This trait is not measured directly but can be estimated from growth data. The results point to the importance of genetic effects and suggest that these models constitute a valuable data-analysis tool for beef cattle breeding.