968 results for PERIODIC AVERAGING
Abstract:
Thesis abstract: PFAPA syndrome is a recurrent febrile disease first described in 1987 by Marshall et al. It is characterized by periodic fever, aphthous stomatitis, pharyngitis, and adenitis. The syndrome begins in the first years of life and is known to resolve spontaneously, as a rule before adolescence. Apart from a course of prednisone at the onset of an episode, no treatment has demonstrated therapeutic or curative efficacy. The origin and etiology of this disease remain unknown, and the diagnosis is one of exclusion, resting on criteria defined by various groups since 1987. Within the Periodic Fever Working Party of the Paediatric Rheumatology European Society (PReS), a group was established which set up a registry of PFAPA patients in order to analyze this disease and better define its diagnostic criteria. Dr Michael Hofer was appointed chairman of this group and quickly entered the patients from French-speaking Switzerland into this working tool. Entering these patients into the resulting database suggested a familial susceptibility, which prompted us to investigate this point in greater depth. We therefore gathered all patients from Lausanne, together with those of colleagues in Bordeaux, with a confirmed diagnosis of PFAPA. We then interviewed the families of these children by telephone using a standardized questionnaire, which had been tested and validated on healthy patients from a general pediatrics clinic. We then pooled all this information and divided the patients into two groups: FH+ (family history positive for recurrent fever) and FH- (family history negative for recurrent fever).
We compared the two groups using the characteristics of these patients taken from the PFAPA registry, in which they are all included. The analyses were checked and validated by the clinical epidemiology center using recognized statistical methods. The results, detailed in the article, point to a familial, and hence potentially genetic, origin for this disease of unknown etiology. Until now, no familial predominance had been demonstrated in other studies on the subject, even though this disease belongs to the group of recurrent fevers, many of which already have a genetic diagnosis. Our study therefore opens perspectives not only for research into a possible genetic cause, but could also allow a better understanding of the disease and its various presentations, and subsequently new therapeutic possibilities.
Abstract:
A member of the Lutzomyia flaviscutellata complex from Rondônia and southern Amazonas States, Brazil, is so close to the Venezuelan Lutzomyia olmeca reducta Feliciangeli et al., 1988, that it is regarded as belonging to the same species. Since this phlebotomine co-exists with L. olmeca nociva in Brazil, the subspecific status of the former is untenable and it is raised to specific rank, as Lutzomyia reducta. The Brazilian material is described and illustrated, and compared with specimens of L. o. nociva and L. flaviscutellata from the same area. Keys to the known taxa of the flaviscutellata complex are presented. Leishmania amazonensis was isolated from one heavily infected specimen of L. reducta, making this the third species of the flaviscutellata complex to be implicated as a vector of this parasite in Brazil. The relative abundance of the three sympatric flaviscutellata complex species varies locally and appears to be related to soil drainage. L. reducta constituted about 25% of all phlebotomines captured in Disney traps at poorly drained and well-drained sites, but appears not to colonize areas subject to periodic flooding. L. olmeca nociva was restricted to poorly drained areas not subject to flooding, whereas L. flaviscutellata was ubiquitous. L. reducta has never been detected north of the Amazon river in Brazil, but the absence of records from western and northwestern Amazonas State may reflect a lack of collecting in these areas.
Abstract:
Block factor methods offer an attractive approach to forecasting with many predictors. These methods extract the information in the predictors into factors reflecting different blocks of variables (e.g. a price block, a housing block, a financial block, etc.). However, a forecasting model which simply includes all blocks as predictors risks being over-parameterized. Thus, it is desirable to use a methodology which allows for different parsimonious forecasting models to hold at different points in time. In this paper, we use dynamic model averaging and dynamic model selection to achieve this goal. These methods automatically alter the weights attached to different forecasting models as evidence comes in about which models have forecast well in the recent past. In an empirical study involving forecasting output growth and inflation using 139 UK monthly time series variables, we find that the set of predictors changes substantially over time. Furthermore, our results show that dynamic model averaging and model selection can greatly improve forecast performance relative to traditional forecasting methods.
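The weight-updating recursion described above can be sketched in a few lines. This is a generic illustration of the forgetting-factor style of dynamic model averaging, not the authors' actual code; the function name `dma_weights`, the forgetting factor `alpha`, and the equal-weight initialization are all assumptions made for the example.

```python
import numpy as np

def dma_weights(pred_liks, alpha=0.99):
    """Recursively update model probabilities for dynamic model averaging.

    pred_liks: (T, K) array of one-step-ahead predictive likelihoods,
    one column per candidate forecasting model.
    alpha: forgetting factor that discounts past forecast performance.
    Returns a (T, K) array of model weights over time.
    """
    T, K = pred_liks.shape
    weights = np.empty((T, K))
    prob = np.full(K, 1.0 / K)          # start from equal model weights
    for t in range(T):
        # prediction step: the forgetting factor flattens the distribution,
        # so no model's weight ever collapses permanently to zero
        pred = prob ** alpha
        pred /= pred.sum()
        # update step: reward models that forecast well in the recent past
        post = pred * pred_liks[t]
        prob = post / post.sum()
        weights[t] = prob
    return weights
```

A model that consistently produces higher predictive likelihoods sees its weight drift toward one, while the forgetting factor keeps the scheme responsive if another model starts forecasting better.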
Abstract:
This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
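As a rough illustration of the stochastic search idea (generic SSVS for a vector of coefficients, not the paper's Vector Error Correction implementation), a single Gibbs step for the variable-inclusion indicators under a spike-and-slab prior might look as follows; the function name and prior settings are hypothetical.

```python
import numpy as np

def draw_inclusion(beta, tau0=0.01, tau1=1.0, prior_pi=0.5, rng=None):
    """One Gibbs step for SSVS inclusion indicators.

    Each coefficient beta[j] has a two-component mixture prior:
    a 'spike' N(0, tau0^2) if the variable is excluded and a
    'slab' N(0, tau1^2) if it is included.
    Returns a boolean array: True means the variable enters the model.
    """
    rng = rng or np.random.default_rng()

    def normal_pdf(x, sd):
        return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    slab = prior_pi * normal_pdf(beta, tau1)          # evidence for inclusion
    spike = (1 - prior_pi) * normal_pdf(beta, tau0)   # evidence for exclusion
    prob_include = slab / (slab + spike)
    return rng.random(beta.shape) < prob_include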
Abstract:
This paper develops stochastic search variable selection (SSVS) for zero-inflated count models which are commonly used in health economics. This allows for either model averaging or model selection in situations with many potential regressors. The proposed techniques are applied to a data set from Germany considering the demand for health care. A package for the free statistical software environment R is provided.
Disentangling the effects of key innovations on the diversification of Bromelioideae (Bromeliaceae).
Abstract:
The evolution of key innovations, novel traits that promote diversification, is often seen as a major driver of the unequal distribution of species richness within the tree of life. In this study, we aim to determine the factors underlying the extraordinary radiation of the subfamily Bromelioideae, one of the most diverse clades among the neotropical plant family Bromeliaceae. Based on an extended molecular phylogenetic data set, we examine the effect of two putative key innovations, that is, the Crassulacean acid metabolism (CAM) and the water-impounding tank, on speciation and extinction rates. To this aim, we develop a novel Bayesian implementation of the phylogenetic comparative method, binary state speciation and extinction, which enables hypothesis testing by Bayes factors and accommodates model selection uncertainty by Bayesian model averaging. Both CAM and tank habit were found to correlate with increased net diversification, thus fulfilling the criteria for key innovations. Our analyses further revealed that CAM photosynthesis is correlated with a twofold increase in speciation rate, whereas the evolution of the tank had primarily an effect on extinction rates, which were found to be five times lower in tank-forming lineages compared to tank-less clades. These differences are discussed in the light of biogeography, ecology, and past climate change.
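The Bayes-factor and model-averaging machinery mentioned above reduces, in its simplest form, to comparing (log) marginal likelihoods across candidate models. The sketch below is generic, not the authors' BiSSE implementation; the function names and the equal prior model weights are assumptions.

```python
import math

def bayes_factor(log_ml_a, log_ml_b):
    """Bayes factor for model A over model B, given log marginal likelihoods."""
    return math.exp(log_ml_a - log_ml_b)

def bma_weights(log_mls):
    """Posterior model probabilities under equal prior model weights,
    computed via log-sum-exp for numerical stability."""
    m = max(log_mls)
    unnorm = [math.exp(l - m) for l in log_mls]
    total = sum(unnorm)
    return [u / total for u in unnorm]
```

In Bayesian model averaging, any quantity of interest (here, speciation or extinction rates) is then averaged across models using these posterior weights rather than conditioning on a single selected model.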
Abstract:
This paper uses forecasts from the European Central Bank's Survey of Professional Forecasters to investigate the relationship between inflation and inflation expectations in the euro area. We use theoretical structures based on the New Keynesian and Neoclassical Phillips curves to inform our empirical work. Given the relatively short data span of the Survey of Professional Forecasters and the need to control for many explanatory variables, we use dynamic model averaging in order to ensure a parsimonious econometric specification. We use both regression-based and VAR-based methods. We find no support for the backward looking behavior embedded in the Neoclassical Phillips curve. Much more support is found for the forward looking behavior of the New Keynesian Phillips curve, but most of this support is found after the beginning of the financial crisis.
Abstract:
In an effort to meet its obligations under the Kyoto Protocol, in 2005 the European Union introduced a cap-and-trade scheme where mandated installations are allocated permits to emit CO2. Financial markets have developed that allow companies to trade these carbon permits. For the EU to achieve reductions in CO2 emissions at a minimum cost, it is necessary that companies make appropriate investments and policymakers design optimal policies. In an effort to clarify the workings of the carbon market, several recent papers have attempted to statistically model it. However, the European carbon market (EU ETS) has many institutional features that potentially impact on daily carbon prices (and associated financial futures). As a consequence, the carbon market has properties that are quite different from conventional financial assets traded in mature markets. In this paper, we use dynamic model averaging (DMA) in order to forecast in this newly-developing market. DMA is a recently-developed statistical method which has three advantages over conventional approaches. First, it allows the coefficients on the predictors in a forecasting model to change over time. Second, it allows for the entire forecasting model to change over time. Third, it surmounts statistical problems which arise from the large number of potential predictors that can explain carbon prices. Our empirical results indicate that there are both important policy and statistical benefits with our approach. Statistically, we present strong evidence that there is substantial turbulence and change in the EU ETS market, and that DMA can model these features and forecast accurately compared to conventional approaches. From a policy perspective, we discuss the relative and changing role of different price drivers in the EU ETS. Finally, we document the forecast performance of DMA and discuss how this relates to the efficiency and maturity of this market.
Abstract:
We develop methods for Bayesian inference in vector error correction models which are subject to a variety of switches in regime (e.g. Markov switches in regime or structural breaks). An important aspect of our approach is that we allow both the cointegrating vectors and the number of cointegrating relationships to change when the regime changes. We show how Bayesian model averaging or model selection methods can be used to deal with the high-dimensional model space that results. Our methods are used in an empirical study of the Fisher effect.
Abstract:
In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly-used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
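The forgetting-factor trick that keeps such estimation computationally simple can be shown in a stripped-down, single-equation setting: instead of specifying an explicit state-noise covariance, the state prediction variance is simply inflated by 1/lambda each period. The paper works with full TVP-VARs; the function below, its initialization, and the parameter values are illustrative assumptions.

```python
import numpy as np

def tvp_filter(y, X, lam=0.99, v=1.0):
    """Kalman filter for a time-varying parameter regression
    y_t = x_t' beta_t + e_t, using a forgetting factor lam in place of
    an explicit state noise: P_{t|t-1} = P_{t-1|t-1} / lam.
    Returns the filtered coefficient paths as a (T, k) array.
    """
    T, k = X.shape
    beta = np.zeros(k)
    P = np.eye(k) * 10.0                      # loose initial state variance
    paths = np.empty((T, k))
    for t in range(T):
        x = X[t]
        P_pred = P / lam                      # forgetting-factor prediction step
        f = x @ P_pred @ x + v                # one-step-ahead forecast variance
        K = P_pred @ x / f                    # Kalman gain
        beta = beta + K * (y[t] - x @ beta)   # measurement update
        P = P_pred - np.outer(K, x) @ P_pred
        paths[t] = beta
    return paths
```

With lam = 1 this collapses to ordinary recursive least squares (constant coefficients); values just below 1 let the coefficients drift, with smaller lam implying faster time variation.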
Abstract:
The efficient markets hypothesis implies that arbitrage opportunities in markets such as those for foreign exchange (FX) would be, at most, short-lived. The present paper surveys the fragmented nature of FX markets, revealing that information in these markets is also likely to be fragmented. The “quant” workforce in the hedge fund featured in The Fear Index novel by Robert Harris would have little or no reason for their existence in an EMH world. The four currency combinatorial analysis of arbitrage sequences contained in Cross, Kozyakin, O’Callaghan, Pokrovskii and Pokrovskiy (2012) is then considered. Their results suggest that arbitrage processes, rather than being self-extinguishing, tend to be periodic in nature. This helps explain the fact that arbitrage dealing tends to be endemic in FX markets.
Abstract:
This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise due to the fact that macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches are surveyed which aim to overcome the resulting problems. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail. This involves the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.