965 results for BAYESIAN-ESTIMATION
Abstract:
The rock-wallaby genus Petrogale comprises a group of habitat-specialist macropodids endemic to Australia. Their restriction to rocky outcrops, with infrequent interpopulation dispersal, has been suggested as the cause of their recent and rapid diversification. Molecular phylogenetic relationships within and among species of Petrogale were analysed using mitochondrial (cytochrome oxidase c subunit 1, cytochrome b, NADH dehydrogenase subunit 2) and nuclear (omega-globin intron, breast and ovarian cancer susceptibility gene) sequence data with representatives that encompassed the morphological and chromosomal variation within the genus, including for the first time both Petrogale concinna and Petrogale purpureicollis. Four distinct lineages were identified: (1) the brachyotis group, (2) Petrogale persephone, (3) Petrogale xanthopus and (4) the lateralis-penicillata group. Three of these lineages include taxa with the ancestral karyotype (2n = 22). Paraphyletic relationships within the brachyotis group indicate the need for a focused phylogeographic study. There was support for P. purpureicollis being reinstated as a full species and for P. concinna being placed within Petrogale rather than in the monotypic genus Peradorcas. Bayesian analyses of divergence times suggest that episodes of diversification commenced in the late Miocene-Pliocene and continued throughout the Pleistocene. Ancestral state reconstructions suggest that Petrogale originated in a mesic environment and dispersed into more arid environments, events that correlate with the timing of radiations in other arid-zone vertebrate taxa across Australia. Crown Copyright (C) 2011 Published by Elsevier Inc. All rights reserved.
Abstract:
We propose a new general Bayesian latent class model for evaluating the performance of multiple diagnostic tests in situations in which no gold standard test exists, based on a computationally intensive approach. The model is a suitable alternative for complex structures involving the general case of several conditionally independent diagnostic tests, covariates, and strata with different disease prevalences. Stratifying the population according to different disease prevalence rates does not add marked complexity to the modeling, but it makes the model more flexible and interpretable. To illustrate the general model, we evaluate the performance of six diagnostic screening tests for Chagas disease, taking some epidemiological variables into account. Serology at the time of donation (negative, positive, inconclusive) was used as a stratification factor in the model. The general model with stratification of the population performed better than its counterparts without stratification. The pair of tests from the Biomanguinhos FIOCRUZ kit (c-ELISA and rec-ELISA) is the best option in the confirmation process, with a false-negative rate of 0.0002% under the serial testing scheme: a donor can be declared healthy with essentially complete certainty when both tests are negative, and chagasic when both are positive.
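To make the kind of model described above concrete, the sketch below is a minimal Gibbs sampler for a latent class model with conditionally independent binary tests and Beta(1, 1) priors, fit to simulated data; it illustrates the general technique only, not the authors' stratified model, and all data, names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_latent_class(y, n_iter=4000, burn=1000):
    """Bayesian latent class model for T conditionally independent binary tests
    with no gold standard.  y is an (n, T) 0/1 array; prevalence, sensitivities
    and specificities all get Beta(1, 1) priors.  Identifiability in practice
    requires at least three tests or several strata with different prevalences."""
    n, T = y.shape
    prev, se, sp = 0.5, np.full(T, 0.8), np.full(T, 0.8)
    draws = []
    for it in range(n_iter):
        # 1. Sample each subject's latent disease status given current parameters.
        p_dis = prev * np.prod(se**y * (1 - se)**(1 - y), axis=1)
        p_hea = (1 - prev) * np.prod((1 - sp)**y * sp**(1 - y), axis=1)
        d = rng.binomial(1, p_dis / (p_dis + p_hea))
        # 2. Sample prevalence, sensitivities and specificities from Beta posteriors.
        prev = rng.beta(1 + d.sum(), 1 + n - d.sum())
        for t in range(T):
            se[t] = rng.beta(1 + (y[:, t] * d).sum(), 1 + ((1 - y[:, t]) * d).sum())
            sp[t] = rng.beta(1 + ((1 - y[:, t]) * (1 - d)).sum(), 1 + (y[:, t] * (1 - d)).sum())
        if it >= burn:
            draws.append((prev, se.copy(), sp.copy()))
    return draws

# Simulated example: 500 subjects, true prevalence 0.3, three imperfect tests.
d_true = rng.binomial(1, 0.3, size=500)
true_se, true_sp = [0.95, 0.90, 0.85], [0.95, 0.90, 0.92]
y = np.column_stack([rng.binomial(1, np.where(d_true == 1, s, 1 - c))
                     for s, c in zip(true_se, true_sp)])
draws = gibbs_latent_class(y)
print("posterior mean prevalence:", round(np.mean([p for p, _, _ in draws]), 3))
```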
Abstract:
In this study we analyzed the phylogeographic pattern and historical demography of an endemic Atlantic forest (AF) bird, Basileuterus leucoblepharus, and tested the influence of the last glacial maximum (LGM) on its effective population size using coalescent simulations. We address two main questions: (i) Does B. leucoblepharus present population genetic structure congruent with the patterns observed for other AF organisms? (ii) How did the LGM affect the effective population size of B. leucoblepharus? We sequenced 914 bp of the mitochondrial gene cytochrome b and 512 bp of the nuclear intron 5 of beta-fibrinogen from 62 individuals from 15 localities along the AF. Both molecular markers revealed no genetic structure in B. leucoblepharus. Neutrality tests based on both loci showed significant demographic expansion. The extended Bayesian skyline plot showed that the species seems to have experienced demographic expansion starting around 300,000 years ago, during the late Pleistocene. This date does not coincide with the LGM, and the population size dynamics showed stability during the LGM. To further test the effect of the LGM on this species, we simulated seven demographic scenarios to explore whether populations suffered specific bottlenecks. The scenarios most congruent with our data were population stability during the LGM with bottlenecks older than this period. This is the first example of an AF organism that does not show phylogeographic breaks caused by vicariant events associated with climate change and geotectonic activity in the Quaternary. Differences in ecological and environmental tolerances and in habitat requirements possibly underlie the different evolutionary histories of these organisms. Our results show that the history of organism diversification in this megadiverse Neotropical forest is complex. Crown Copyright (c) 2012 Published by Elsevier Inc. All rights reserved.
Abstract:
Tuber borchii (Ascomycota, order Pezizales) is a highly valued truffle sold in local markets in Italy. Despite its economic importance, knowledge of its distribution and population variation is scarce. The objective of this work was to investigate the evolutionary forces shaping the genetic structure of this fungus, using coalescent and phylogenetic methods to reconstruct the evolutionary history of populations in Italy. To assess population structure, 61 specimens were collected from 11 different provinces of Italy. Sampling was stratified across hosts and habitats to maximize coverage in native oak and pine stands, and both mycorrhizae and fruiting bodies were collected. Samples were identified on the basis of anatomo-morphological characters. DNA was extracted, and both multilocus (AFLP) and single-locus (18 loci from rDNA, nDNA, and mtDNA) approaches were used to screen for polymorphisms. For the AFLP profiles, both Jaccard and Dice similarity coefficients were used to transform the binary matrix into a distance matrix and then to derive Neighbour-Joining trees. Although these are only preliminary analyses, the resulting trees were fully concordant with those derived from the single-locus analyses. Phylogenetic analyses of the nuclear loci were performed using maximum likelihood with PAUP, and a combined phylogenetic inference using Bayesian estimation over all nuclear gene regions was carried out. To reconstruct the evolutionary history, we estimated recurrent migration, migration across the history of the sample, and the approximate ages of mutations in each tree using SNAP Workbench. The combined Bayesian phylogenetic tree suggests that there are two main haplotypes that are difficult to differentiate on the basis of morphology, ecological parameters, or symbiotic host tree. Between these two lineages, which occur in sympatry within T. borchii populations, there is no evidence of recurrent migration. However, migration over the history of the sample was asymmetrical, suggesting that isolation resulted from interrupted gene flow followed by range expansion. Low levels of divergence between the haplotypes indicate that there are likely to be two cryptic species within the T. borchii population sampled. Our results suggest that isolation between populations of T. borchii could have led to reproductive isolation between two lineages, likely due to sympatric speciation caused by multiple colonizations from different refugia or by a recent isolation event. In attempting to determine whether these haplotypes represent separate species or a partition within a single species, we applied the Biological and Mechanistic Species Concepts. Nevertheless, further analyses are necessary to evaluate whether selection favoured premating or postmating isolation.
Abstract:
The aim of this thesis is to propose Bayesian estimation, through Markov chain Monte Carlo, of multidimensional item response theory models for graded responses with complex structures and correlated traits. In particular, this work focuses on the multiunidimensional and the additive underlying latent structures: the first is widely used and represents a classical approach in multidimensional item response analysis, while the second is able to reflect the complexity of real interactions between items and respondents. A simulation study is conducted to evaluate parameter recovery for the proposed models under different conditions (sample size, test and subtest length, number of response categories, and correlation structure). The results show that parameter recovery is particularly sensitive to sample size, due to the model complexity and the large number of parameters to be estimated. For a sufficiently large sample size, the parameters of the multiunidimensional and additive graded response models are well recovered. The results are also affected by the trade-off between the number of items constituting the test and the number of item categories. An application of the proposed models to response data collected to investigate Romagna and San Marino residents' perceptions of and attitudes towards the tourism industry is also presented.
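As background for the graded response models discussed above, the sketch below computes category probabilities under a unidimensional logistic graded response model, the building block that the multiunidimensional and additive structures extend; it is an illustration only, with made-up discrimination and threshold values, not code from the thesis.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities of a logistic graded response model.

    theta : latent trait value of one respondent.
    a     : item discrimination.
    b     : increasing thresholds, one per boundary between adjacent categories.
    Returns a vector of probabilities over len(b) + 1 ordered categories.
    """
    # Cumulative probabilities P(X >= k) for k = 1..K-1, padded with 1 and 0.
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    cum = np.concatenate(([1.0], cum, [0.0]))
    return cum[:-1] - cum[1:]        # P(X = k) = P(X >= k) - P(X >= k + 1)

# Example: a 5-category item with discrimination 1.2 and thresholds (-1, 0, 1, 2).
print(grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0, 2.0]))
```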
Abstract:
Methods for tracking an object have generally fallen into two groups: tracking by detection and tracking through local optimization. The advantage of detection-based tracking is its ability to deal with target appearance and disappearance, but it does not naturally take advantage of target motion continuity during detection. The advantage of local optimization is efficiency and accuracy, but it requires additional algorithms to initialize tracking when the target is lost. To bridge these two approaches, we propose a framework that unifies detection and tracking as a time-series Bayesian estimation problem. The basis of our approach is to treat both detection and tracking as a sequential entropy minimization problem, where the goal is to determine the parameters describing a target in each frame. To do this, we integrate the Active Testing (AT) paradigm with Bayesian filtering, which results in a framework capable of both detecting and tracking robustly in situations where the target object enters and leaves the field of view regularly. We demonstrate our approach on a retinal tool tracking problem and show through extensive experiments that our method provides an efficient and robust tracking solution.
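For readers unfamiliar with time-series Bayesian estimation in tracking, the sketch below is a generic bootstrap particle filter for a one-dimensional target position; it illustrates the predict-reweight-resample cycle only and is not the Active Testing framework of the abstract (the motion and observation models are assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_particle_filter(observations, n_particles=1000,
                              process_std=1.0, obs_std=2.0):
    """Sequential Bayesian estimation of a 1-D target position.

    Each frame: propagate particles through a random-walk motion model,
    reweight them by the observation likelihood, and resample.
    Returns the posterior mean position per frame.
    """
    particles = rng.normal(0.0, 10.0, size=n_particles)   # diffuse prior
    estimates = []
    for z in observations:
        # Predict: random-walk motion model.
        particles = particles + rng.normal(0.0, process_std, size=n_particles)
        # Update: Gaussian observation likelihood.
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample to avoid weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=weights)
        estimates.append(particles.mean())
    return np.array(estimates)

# Example: noisy observations of a target drifting to the right.
true_path = np.cumsum(rng.normal(0.5, 1.0, size=50))
obs = true_path + rng.normal(0.0, 2.0, size=50)
print(bootstrap_particle_filter(obs)[-5:].round(2))
```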
Abstract:
Allocations of research funds across programs are often made for efficiency reasons. Social science research is shown to have small, lagged but significant effects on U.S. agricultural efficiency when public agricultural R&D and extension are simultaneously taken into account. Farm management and marketing research variables are used to explain variations in estimates of allocative and technical efficiency using a Bayesian approach that incorporates stylized facts concerning lagged research impacts in a way that is less restrictive than popular polynomial distributed lags. Results are reported in terms of means and standard deviations of estimated probability distributions of parameters and long-run total multipliers. Extension is estimated to have a greater impact on both allocative and technical efficiency than either R&D or social science research.
Abstract:
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system comprising, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. Imposing Bayesian priors on the VAR estimations also allowed us to obtain another set of results. We find that there is some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperformed those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is very weak.
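A hedged sketch of the general idea of a Bayesian VAR: coefficients are shrunk toward a random-walk prior mean under a simple Gaussian (Minnesota-style) prior with fixed noise variance, giving a closed-form posterior mean. The data, lag length and shrinkage value below are illustrative assumptions, not the specification used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def bayesian_var_posterior_mean(Y, p=1, lam=0.2):
    """Posterior mean of VAR(p) coefficients under a Gaussian shrinkage prior.

    Each equation is shrunk toward a random-walk prior mean (own first lag = 1,
    everything else = 0), a simplified Minnesota-style prior.  With a Gaussian
    prior B ~ N(B0, (1/lam) * I) and fixed noise variance, the posterior mean is
    (X'X + lam*I)^{-1} (X'Y + lam*B0).
    """
    T, k = Y.shape
    # Build the lagged regressor matrix with an intercept.
    X = np.hstack([Y[p - 1 - j:T - 1 - j] for j in range(p)])
    X = np.hstack([np.ones((T - p, 1)), X])
    Yt = Y[p:]
    # Random-walk prior mean: coefficient 1 on each variable's own first lag.
    B0 = np.zeros((1 + k * p, k))
    B0[1:k + 1, :] = np.eye(k)
    lhs = X.T @ X + lam * np.eye(X.shape[1])
    rhs = X.T @ Yt + lam * B0
    return np.linalg.solve(lhs, rhs)

# Example: three simulated FX-like series, one lag.
Y = np.cumsum(rng.normal(size=(200, 3)), axis=0)
print(bayesian_var_posterior_mean(Y, p=1, lam=0.2).round(2))
```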
Abstract:
Using survey data on 157 large private Hungarian and Polish companies, this paper investigates links between ownership structures and CEOs' expectations with regard to sources of finance for investment. Bayesian estimation is used to deal with the small-sample restrictions, while classical methods provide robustness checks. We find a hump-shaped relationship between ownership concentration and expectations of relying on public equity. The latter is most likely for firms where the largest investor owns between 25 percent and 49 percent of shares, just below the legal control threshold. More profitable firms rely on retained earnings for their investment finance, consistent with the 'pecking order' theory of financing. Finally, firms whose largest shareholder is a domestic institutional investor are more likely to borrow from domestic banks.
Abstract:
This dissertation comprises two papers, both related to mortality in Brazil. The first, "The context of mortality according to the three broad groups of causes of death in Brazilian capitals, 2000 and 2010", analyzes mortality according to the three major groups of causes of death in the Brazilian state capitals. The second, "Typology and characteristics of mortality from external causes in the municipalities of Northeast Brazil, 2000 and 2010", builds a typology for the Northeastern municipalities based on mortality from external causes together with a set of indicators describing the socioeconomic, demographic, and infrastructure conditions of those municipalities; both articles cover the years 2000 and 2010. We used data from the Mortality Information System of the Ministry of Health, as well as information from the Demographic Censuses of those years. The socioeconomic and demographic variables used in this study were those available from the United Nations Development Programme. In Article 1, the pro-rata distribution method was used to redistribute deaths from ill-defined causes, and cluster analysis was used to group capitals with similar proportions of deaths from ill-defined causes. In Article 2, we used empirical Bayesian estimation, spatial statistics, and the Grade of Membership method to identify types of municipalities from information on mortality from external causes combined with socioeconomic, demographic and infrastructure variables. Among the main results of Article 1, with respect to data quality, four groups of capitals with similar proportions of ill-defined causes were identified. Regarding mortality by the three major groups of causes of death, deaths from noncommunicable diseases predominated for both sexes in both 2000 and 2010, although declining rates were identified in some capitals. Communicable diseases were the second leading cause of death among women, while external causes were the second leading cause of death among men and also increased among women. Article 2 shows not only an increase in mortality from external causes in the municipalities but also a spatial spread of external-cause deaths across the entire Northeast. Regarding the typology of municipalities, three extreme profiles were built: profile 1, comprising municipalities with high mortality rates from external causes and the best social indicators; profile 2, composed of municipalities with low mortality rates from external causes and the lowest social indicators; and profile 3, bringing together municipalities with intermediate mortality rates and median values of the social indicators. Although the characteristics of the profiles did not change, we observed an increase in the proportion of municipalities belonging to extreme profile 3 when the mixed profiles are taken into account.
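As an illustration of the empirical Bayesian estimation step mentioned above, the sketch below applies generic Gamma-Poisson shrinkage to smooth the crude death rates of small municipalities toward the regional mean; the populations and counts are simulated and the method is a textbook version, not the one implemented in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(3)

def empirical_bayes_rates(deaths, population, per=100_000):
    """Empirical Bayes (Gamma-Poisson) smoothing of crude death rates.

    Small municipalities have noisy crude rates; shrinking each rate toward
    the regional mean, by an amount governed by population size, stabilizes
    the map.  The Gamma prior is fitted by the method of moments.
    """
    crude = deaths / population
    mean = deaths.sum() / population.sum()
    var = max(np.average((crude - mean) ** 2, weights=population)
              - mean / population.mean(), 1e-12)
    alpha, beta = mean**2 / var, mean / var
    smoothed = (alpha + deaths) / (beta + population)   # posterior mean rate
    return crude * per, smoothed * per

# Simulated example: 10 municipalities with very different population sizes.
population = rng.integers(2_000, 500_000, size=10).astype(float)
deaths = rng.poisson(population * 60 / 100_000).astype(float)
crude, smoothed = empirical_bayes_rates(deaths, population)
print(np.column_stack([crude, smoothed]).round(1))
```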
Abstract:
Incremental (uplift) models are statistical models that were originally developed in the field of marketing. They involve two groups, a control group and a treatment group, both compared with respect to a binary response variable (the possible responses being "yes" or "no"). The purpose of these models is to detect the effect of the treatment on the individuals under study. Since these individuals are not all customers, we call them "prospects". The effect can be negative, null, or positive depending on the characteristics of the individuals making up the two groups. The objective of this thesis is to compare incremental models from a Bayesian and from a frequentist point of view. The incremental models used in practice are those of Lo (2002) and Lai (2004), which were originally formulated in a frequentist framework. In this thesis, the Bayesian approach is therefore applied and compared with the frequentist approach. Simulations are carried out on data generated with logistic regressions, and the parameters of these regressions are estimated with Monte Carlo simulations in the Bayesian approach and compared with those obtained in the frequentist approach. Parameter estimation has a direct influence on the model's ability to correctly predict the treatment effect on individuals. We consider three prior distributions for Bayesian parameter estimation, chosen to be non-informative: the transformed beta distribution, the Cauchy distribution, and the normal distribution. Over the course of the study, we observe that the Bayesian methods have a real positive impact on the targeting of individuals in small samples.
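A hedged sketch of one way such a Bayesian incremental model can be estimated: a logistic regression with a treatment indicator and covariate-by-treatment interactions, Cauchy priors on the coefficients, and a random-walk Metropolis sampler, from which the per-prospect uplift is the posterior mean difference between treated and control response probabilities. The data, prior scale and tuning values are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def uplift_metropolis(X, treat, y, n_iter=10000, burn=2000, step=0.05, scale=2.5):
    """Bayesian uplift model: logistic regression with treatment interactions,
    Cauchy(0, scale) priors, and random-walk Metropolis sampling."""
    def design(X, t):
        return np.hstack([np.ones((len(X), 1)), X, t[:, None], X * t[:, None]])

    D = design(X, treat)

    def log_post(beta):
        eta = D @ beta
        loglik = np.sum(y * eta - np.logaddexp(0.0, eta))          # Bernoulli
        logprior = -np.sum(np.log1p((beta / scale) ** 2))          # Cauchy
        return loglik + logprior

    beta, lp, draws = np.zeros(D.shape[1]), None, []
    lp = log_post(beta)
    for it in range(n_iter):
        prop = beta + rng.normal(0.0, step, size=beta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:                   # accept/reject
            beta, lp = prop, lp_prop
        if it >= burn:
            draws.append(beta)
    draws = np.array(draws)

    # Posterior-mean uplift per prospect: P(y=1 | treated) - P(y=1 | control).
    p1 = 1.0 / (1.0 + np.exp(-design(X, np.ones(len(X))) @ draws.T))
    p0 = 1.0 / (1.0 + np.exp(-design(X, np.zeros(len(X))) @ draws.T))
    return (p1 - p0).mean(axis=1)

# Simulated example: one covariate that moderates the treatment effect.
X = rng.normal(size=(300, 1))
treat = rng.binomial(1, 0.5, size=300).astype(float)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * X[:, 0] * treat))))
print(uplift_metropolis(X, treat, y)[:5].round(3))
```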
Abstract:
Phylogenies and molecular clocks of the diatoms have largely been inferred from SSU rDNA sequences. A new phylogeny of diatoms was estimated using four gene markers, SSU and LSU rDNA, rbcL and psbA (4352 bp in total), for 42 diatom species. The four-gene trees, analysed with maximum likelihood (ML) and Bayesian inference (BI), recovered a monophyletic origin of the new diatom classes with high bootstrap support, a result that has been controversial with single gene markers using single outgroups and alignments that do not take the secondary structure of the SSU gene into account. The divergence times of the classes were calculated from an ML tree in the MultiDivTime program using Bayesian estimation, which allows simultaneous constraints from the fossil record and varying rates of molecular evolution on different branches of the phylogenetic tree. These divergence times are generally in agreement with those proposed by other clocks using single genes, with the exception that the pennates appear much earlier, suggesting a longer Cretaceous fossil record that has yet to be sampled. Ghost lineages (i.e. the discrepancy between first appearance (FA) and the molecular clock age of origin of an extant taxon) were revealed in the pennate lineage, whereas the ghost lineages in the centric lineages previously reported by others are reviewed with reference to the earlier literature.
Abstract:
Many exchange rate papers articulate the view that instabilities constitute a major impediment to exchange rate predictability. In this thesis we implement Bayesian and other techniques to account for such instabilities, and examine some of the main obstacles to exchange rate models' predictive ability. In Chapter 2 we first consider a time-varying parameter model in which fluctuations in exchange rates are related to short-term nominal interest rates ensuing from monetary policy rules, such as Taylor rules. Unlike existing exchange rate studies, the parameters of our Taylor rules are allowed to change over time, in light of the widespread evidence of shifts in fundamentals, for example in the aftermath of the Global Financial Crisis. Focusing on quarterly-frequency data from the crisis onwards, we detect forecast improvements upon a random walk (RW) benchmark for at least half, and for as many as seven out of 10, of the currencies considered. Results are stronger when we allow the time-varying parameters of the Taylor rules to differ between countries. In Chapter 3 we look closely at the role of time variation in parameters and other sources of uncertainty in hindering exchange rate models' predictive power. We apply a Bayesian setup that incorporates the notion that the relevant set of exchange rate determinants, and their corresponding coefficients, change over time. Using statistical and economic measures of performance, we first find that predictive models which allow for sudden, rather than smooth, changes in the coefficients yield significant forecast improvements and economic gains at horizons beyond one month. At shorter horizons, however, our methods fail to forecast better than the RW. We identify uncertainty in coefficient estimation, and uncertainty about the precise degree of coefficient variability to incorporate in the models, as the main factors obstructing predictive ability. Chapter 4 focuses on the time-varying predictive ability of economic fundamentals for exchange rates. It uses bootstrap-based methods to uncover the time-specific conditioning information for predicting fluctuations in exchange rates. Employing several metrics for the statistical and economic evaluation of forecasting performance, we find that our approach, based on pre-selecting and validating fundamentals across bootstrap replications, generates more accurate forecasts than the RW. The approach, known as bumping, robustly reveals parsimonious models with out-of-sample predictive power at the one-month horizon and outperforms alternative methods, including Bayesian, bagging, and standard forecast combinations. Chapter 5 exploits the predictive content of daily commodity prices for monthly commodity-currency exchange rates. It builds on the idea that the effect of daily commodity price fluctuations on commodity currencies is short-lived, and therefore harder to pin down at low frequencies. Using MIxed DAta Sampling (MIDAS) models, and Bayesian estimation methods to account for time variation in predictive ability, the chapter demonstrates the usefulness of suitably exploiting such short-lived effects to improve exchange rate forecasts. It further shows that the usual low-frequency predictors, such as money supply and interest rate differentials, typically receive little support from the data at the monthly frequency, whereas MIDAS models featuring daily commodity prices receive strong support. The chapter also introduces the random walk Metropolis-Hastings technique as a new tool to estimate MIDAS regressions.
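A minimal sketch of the MIDAS ingredient referred to above: roughly 22 daily observations are collapsed into one monthly regressor through exponential Almon weights, whose two parameters could then be sampled with a random-walk Metropolis-Hastings step like the one mentioned in the abstract. The function names, parameter values and simulated data are assumptions, not the thesis code.

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag polynomial: normalized weights over daily lags."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()

def midas_regressor(daily_series, theta1, theta2, days_per_month=22):
    """Collapse a daily predictor into one weighted value per month."""
    n_months = len(daily_series) // days_per_month
    X = daily_series[:n_months * days_per_month].reshape(n_months, days_per_month)
    w = exp_almon_weights(theta1, theta2, days_per_month)
    return X @ w[::-1]          # the most recent day in each month gets weight w[0]

# Example: 10 months of simulated daily commodity returns; with theta2 < 0 the
# weights decay, so recent days dominate the monthly regressor.
rng = np.random.default_rng(5)
daily = rng.normal(size=220)
print(exp_almon_weights(0.1, -0.05, 5).round(3))
print(midas_regressor(daily, theta1=0.1, theta2=-0.05)[:3].round(3))
```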