959 results for common factor models


Relevance: 90.00%

Abstract:

Understanding the dynamics of interest rates and the term structure has important implications for issues as diverse as real economic activity, monetary policy, pricing of interest rate derivative securities and public debt financing. Our paper follows a longstanding tradition of using factor models of interest rates but proposes a semi-parametric procedure to model interest rates.
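A common parametric starting point for such models is to extract a small number of factors, often interpreted as level, slope and curvature, from a panel of yields by principal components. The following is a minimal sketch of that extraction, using simulated yields in place of actual term-structure data; the paper's semi-parametric procedure itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 200, 8                       # 200 months, 8 maturities (hypothetical)
yields = 5.0 + np.cumsum(rng.normal(size=(T, N)) * 0.1, axis=0)

X = yields - yields.mean(axis=0)    # demean each maturity
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 3                               # level, slope, curvature
factors = U[:, :k] * S[:k]          # T x k common factors
loadings = Vt[:k].T                 # N x k factor loadings

# Share of yield-curve variation explained by the first three factors
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"first {k} factors explain {explained:.1%} of the yield variation")
```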

Relevance: 90.00%

Abstract:

The aim of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several periods of time. Their double dimension, individual and temporal, makes it possible to control for unobservable heterogeneity across individuals and across time periods, and hence to carry out richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it can deliver inference that is more precise than classical asymptotic theory, or inference that is otherwise infeasible in the presence of nuisance parameters. The method consists of drawing random samples that resemble the original sample as closely as possible; the statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data, but without rigorous theoretical justification or only under strong assumptions. This thesis proposes a bootstrap method better suited to panel data, and its three chapters analyse its validity and application.

The first chapter posits a simple model with a single parameter and examines the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual and the temporal dimension, is valid in these models. Resampling in the individual dimension alone is not valid in the presence of temporal heterogeneity, and resampling in the temporal dimension alone is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model. Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary both over time and across individuals. Using a two-way error-components model, the ordinary least squares estimator and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients of regressors that vary only across individuals, while resampling in the temporal dimension is valid only for the subvector of parameters associated with regressors that vary only over time. Double resampling, in turn, is valid for inference on the whole parameter vector. The third chapter revisits the difference-in-differences exercise of Bertrand, Duflo and Mullainathan (2004), an estimator widely used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States of America from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence leads to severe size distortions of tests that evaluate the impact of public policies using panel data. One of the recommended solutions is to use the bootstrap. The double resampling method developed in this thesis corrects the size problem and thus allows the impact of public policies to be evaluated correctly.
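A minimal sketch of the double (two-way) resampling idea described above, for the mean of a hypothetical N x T panel: both the individual and the time dimension are drawn with replacement, unlike individual-only or time-only schemes.

```python
import numpy as np

def double_bootstrap_mean(panel, n_boot=999, seed=0):
    """panel: (N individuals) x (T periods) array; returns bootstrap draws."""
    rng = np.random.default_rng(seed)
    N, T = panel.shape
    stats = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, N, size=N)   # resample individuals
        t = rng.integers(0, T, size=T)   # resample time periods
        stats[b] = panel[np.ix_(i, t)].mean()
    return stats

rng = np.random.default_rng(1)
# Hypothetical panel with both individual and time heterogeneity
alpha, gamma = rng.normal(size=(50, 1)), rng.normal(size=(1, 20))
y = alpha + gamma + rng.normal(size=(50, 20))
draws = double_bootstrap_mean(y)
print("95% bootstrap interval:", np.percentile(draws, [2.5, 97.5]))
```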

Relevance: 90.00%

Abstract:

Lyme disease is the most common vector-borne disease in temperate countries and is emerging in several regions of the world. Several prevention strategies exist, including interventions aimed at individuals, such as wearing protective clothing, and others implemented at the collective level, including environmental tick-control interventions. The effectiveness of these strategies can be influenced by various factors, including social factors such as the knowledge, perceptions and behaviours of the target population. The strategies can also have unintended parallel impacts, for example on the environment and the economy, which can offset the benefits of the interventions to the point of calling their implementation into question. Moreover, these social factors and intervention impacts are likely to vary with the target population and with the epidemiological and social context. The objective of this thesis was therefore to study the main social factors and key issues to consider when evaluating the effectiveness of, and prioritizing, Lyme disease prevention interventions in two populations exposed to different contexts, notably with respect to their epidemiological situation: Quebec, where the incidence of Lyme disease is low but emerging, and Switzerland, where it has been high and endemic for more than three decades. The chosen approach and the overall study design are based on two main theoretical models: the health belief model and multicriteria decision aiding. First, the factors associated with the perception of Lyme disease risk, that is, a person's cognitive appraisal of the risk they face, were studied. The results suggest that the significant factors differ between the two study regions. Next, the impact of knowledge, exposure and perceptions on the adoption of individual preventive behaviours and on the acceptability of tick-control interventions (acaricides, habitat modification, deer control) was compared. The results suggest that the impact of these factors varies with the type of behaviour and intervention, but that perceived effectiveness is a common factor strongly associated with both aspects and could be a key factor to target in communication campaigns. The results also show that the issues surrounding tick-control interventions, as perceived by the general population, are common to the two study contexts and shared by the stakeholders involved in Lyme disease prevention. Finally, a multicriteria analysis model was developed through a participatory approach for the Quebec context and then adapted to the Swiss context; it was used to evaluate and prioritize preventive interventions according to the stakeholders' different perspectives. The rankings produced by the models in Quebec and Switzerland prioritized interventions that mainly target human populations ahead of tick-control interventions.
Applying multicriteria decision aiding to Lyme disease prevention yielded a versatile decision model that is adaptable to different contexts, including the epidemiological situation. This work demonstrates that the approach can integrate, rigorously and transparently, the multiple perspectives of stakeholders and the prevention issues relating to public health, animal and environmental health, social impacts, and economic, operational and strategic considerations. Using such models in public health would foster the adoption of a "One Health" approach to the prevention of Lyme disease and of zoonoses in general. Keywords: Lyme disease, prevention, social factors, risk perception, preventive behaviours, acceptability, prioritization of interventions, tick control, multicriteria decision aiding, multicriteria analysis, Quebec, Switzerland, "One Health"
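The details of the thesis's decision model are not given here, so the following is only a generic weighted-sum illustration of how multicriteria scores can rank interventions; the intervention names come from the abstract, while the criteria, weights and scores are invented placeholders.

```python
# Illustrative weighted-sum multicriteria ranking (not the thesis's model;
# criteria, weights and scores below are hypothetical).
interventions = ["personal protection", "acaricides",
                 "habitat modification", "deer control"]
criteria = ["public health", "environment", "cost", "acceptability"]
weights = [0.4, 0.2, 0.2, 0.2]           # one stakeholder's weighting
scores = [                                # rows: interventions, 0-10 scale
    [8, 9, 7, 9],
    [7, 3, 5, 4],
    [5, 6, 6, 6],
    [6, 4, 4, 3],
]
total = {name: sum(w * s for w, s in zip(weights, row))
         for name, row in zip(interventions, scores)}
for name, value in sorted(total.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.1f}")
```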

Relevance: 90.00%

Abstract:

Factor forecasting models are shown to deliver real-time gains over autoregressive models for US real activity variables during the recent period, but are less successful for nominal variables. The gains are largely due to the Financial Crisis period, and are primarily at the shortest (one quarter ahead) horizon. Excluding the pre-Great Moderation years from the factor forecasting model estimation period (but not from the data used to extract factors) results in a marked fillip in factor model forecast accuracy, but does the same for the AR model forecasts. The relative performance of the factor models compared to the AR models is largely unaffected by whether the exercise is in real time or is pseudo out-of-sample.
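A sketch of the underlying comparison, assuming simulated data: factors are extracted from a large predictor panel by principal components and added to an autoregression for a one-step-ahead ("diffusion index") forecast. In practice X would be a large US macro panel and y a real activity variable.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 160, 50
f = np.cumsum(rng.normal(size=T)) * 0.1          # latent common factor
X = np.outer(f, rng.normal(size=N)) + rng.normal(size=(T, N))
y = np.empty(T)
y[0] = 0.0
y[1:] = 0.5 * f[:-1] + rng.normal(size=T - 1) * 0.5

Xc = X - X.mean(axis=0)
U, S, _ = np.linalg.svd(Xc, full_matrices=False)
fhat = U[:, 0] * S[0]                            # first estimated factor

# Factor model: y_{t+1} = a + b*y_t + c*fhat_t + e; the AR(1) benchmark
# simply drops the fhat column from Z.
Z = np.column_stack([np.ones(T - 1), y[:-1], fhat[:-1]])
beta = np.linalg.lstsq(Z, y[1:], rcond=None)[0]
forecast = beta @ np.array([1.0, y[-1], fhat[-1]])
print("factor-model one-step forecast:", forecast)
```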

Relevance: 90.00%

Abstract:

The five-factor ‘Behavioural-Intentions Battery’ was developed by Zeithaml, Berry and Parasuraman (1996) to measure customer behavioural and attitudinal intentions. The structure of this model was re-examined by Bloemer, de Ruyter and Wetzels (1999) across different service industries; they concluded that service loyalty is a multidimensional construct consisting of four, not five, distinct dimensions. To date, neither model has been tested within a banking environment. This research independently tested the ‘goodness of fit’ of both the four- and five-factor models to data collected from branch bank customers. Data were collected via questionnaire from a sample of 348 banking customers. A confirmatory factor analysis was conducted on the two opposing factor structures, revealing that the five-factor structure has a superior model fit; however, the fit is ‘marginal’.
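A hedged sketch of such a model comparison using the semopy package (not the software used in the study); the five dimension names follow Zeithaml, Berry and Parasuraman (1996), while the item-to-factor assignments and the placeholder data are invented for illustration.

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
items = [f"item{i}" for i in range(1, 12)]
data = pd.DataFrame(rng.normal(size=(348, 11)), columns=items)  # placeholder

five_factor = """
Loyalty      =~ item1 + item2 + item3
Switch       =~ item4 + item5
PayMore      =~ item6 + item7
ExternalResp =~ item8 + item9
InternalResp =~ item10 + item11
"""
four_factor = """
Loyalty      =~ item1 + item2 + item3
Switch       =~ item4 + item5 + item6 + item7
ExternalResp =~ item8 + item9
InternalResp =~ item10 + item11
"""

def fit_indices(desc, df):
    model = semopy.Model(desc)
    model.fit(df)
    return semopy.calc_stats(model)[["chi2", "CFI", "RMSEA"]]

print(fit_indices(five_factor, data))   # compare fit of the two structures
print(fit_indices(four_factor, data))
```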

Relevance: 90.00%

Abstract:

The theory of uniqueness has been invoked to explain attitudinal and behavioral nonconformity with respect to peer-group, social-cultural, and statistical norms, as well as the development of a distinctive view of self via seeking novelty goods, adopting new products, acquiring scarce commodities, and amassing material possessions. Present research endeavors in psychology and consumer behavior are inhibited by uncertainty regarding the psychometric properties of the Need for Uniqueness Scale, the primary instrument for measuring individual differences in uniqueness motivation. In an important step toward facilitating research on uniqueness motivation, we used confirmatory factor analysis to evaluate three a priori latent variable models of responses to the Need for Uniqueness Scale. Among the a priori models, an oblique three-factor model best accounted for commonality among items. Exploratory factor analysis followed by estimation of unrestricted three- and four-factor models revealed that a model with a complex pattern of loadings on four modestly correlated factors may best explain the latent structure of the Need for Uniqueness Scale. Additional analyses evaluated the associations among the three a priori factors and an array of individual differences. Results of those analyses indicated the need to distinguish among facets of the uniqueness motive in behavioral research.
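The exploratory step described above can be illustrated with the factor_analyzer package: an oblique (oblimin) rotation allows the modestly correlated factors the authors report. The data below are random placeholders for the scale's items, so only the mechanics, not the substantive result, are shown.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 32))          # placeholder: respondents x items

fa = FactorAnalyzer(n_factors=4, rotation="oblimin")
fa.fit(X)
print(fa.loadings_)                     # pattern matrix: items x 4 factors
print(fa.phi_)                          # factor intercorrelation matrix
```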

Relevance: 90.00%

Abstract:

As is well known, when using an information criterion to select the number of common factors in factor models, the appropriate penalty is generally indeterminate in the sense that it can be scaled by an arbitrary constant, c say, without affecting consistency. In an influential paper, Hallin and Liška (J Am Stat Assoc 102:603–617, 2007) propose a data-driven procedure for selecting the appropriate value of c. However, by removing one source of indeterminacy, the new procedure simultaneously creates several new ones, which make for rather complicated implementation, a problem that has been largely overlooked in the literature. By providing an extensive analysis using both simulated and real data, the current paper fills this gap.
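A stripped-down illustration of the idea, under a Bai-Ng-type penalty p(N, T) (an assumption; the paper's setting is the general dynamic factor model): the criterion log V(k) + c * k * p(N, T) is minimized over k for a grid of constants c, and one looks for a region of c where the selected k is stable across nested subsamples. The full procedure has further ingredients not reproduced here.

```python
import numpy as np

def selected_k(X, c, kmax=8):
    """Number of factors minimizing log V(k) + c * k * p(N, T)."""
    T, N = X.shape
    s2 = np.linalg.svd(X - X.mean(axis=0), compute_uv=False) ** 2 / (N * T)
    penalty = c * ((N + T) / (N * T)) * np.log(min(N, T))
    ic = [np.log(s2[k:].sum()) + k * penalty for k in range(1, kmax + 1)]
    return 1 + int(np.argmin(ic))

rng = np.random.default_rng(0)
F = rng.normal(size=(300, 3))                        # 3 true factors
X = F @ rng.normal(size=(3, 60)) + rng.normal(size=(300, 60))

for c in (0.25, 0.5, 1.0, 2.0, 4.0):
    ks = [selected_k(X[:T0, :N0], c)                 # nested subsamples
          for T0, N0 in ((150, 30), (225, 45), (300, 60))]
    print(f"c = {c}: selected k across subsamples = {ks}")
```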

Relevance: 90.00%

Abstract:

This article proposes a bias-adjusted estimator for use in cointegrated panel regressions when the errors are cross-sectionally correlated through an unknown common factor structure. The asymptotic distribution of the new estimator is derived and is examined in small samples using Monte Carlo simulations. For the estimation of the number of factors, several information-based criteria are considered. The simulation results suggest that the new estimator performs well in comparison to existing ones. In our empirical application, we provide new evidence suggesting that the forward rate unbiasedness hypothesis cannot be rejected.
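The error structure in question can be made concrete with a short simulation: errors e_it = lambda_i * f_t + u_it are cross-sectionally correlated through the unobserved common factor f_t, which is what an unadjusted estimator ignores. A minimal sketch with invented dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 100
f = rng.normal(size=T)                      # unobserved common factor
lam = rng.normal(size=N)                    # unit-specific loadings
e = lam[:, None] * f[None, :] + rng.normal(size=(N, T))

corr = np.corrcoef(e)                       # N x N cross-unit correlations
off_diag = corr[~np.eye(N, dtype=bool)]
print("mean |cross-unit error correlation|:", np.abs(off_diag).mean())
```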

Relevance: 90.00%

Abstract:

This paper develops a very simple test for the null hypothesis of no cointegration in panel data. The test is general enough to allow for heteroskedastic and serially correlated errors, unit-specific time trends, cross-sectional dependence and unknown structural breaks in both the intercept and slope of the cointegrated regression, which may be located at different dates for different units. The limiting distribution of the test is derived, and is found to be normal and free of nuisance parameters under the null. A small simulation study is also conducted to investigate the small-sample properties of the test. In our empirical application, we provide new evidence concerning the purchasing power parity hypothesis.
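For intuition only, the generic residual-based logic that such panel tests build on can be sketched as follows: estimate the cointegrating regression unit by unit, compute a Dickey-Fuller-type t-ratio on the residuals, and average across units. This is not the paper's statistic, which also handles breaks, trends and cross-sectional dependence, and the illustrative statistic below comes with no critical values.

```python
import numpy as np

def unit_df_stat(y, x):
    """Dickey-Fuller-type t-ratio on one unit's cointegrating residuals."""
    b = np.polyfit(x, y, 1)                 # unit-by-unit cointegrating fit
    e = y - np.polyval(b, x)
    de, lag = np.diff(e), e[:-1]
    rho = (lag @ de) / (lag @ lag)
    se = np.sqrt(((de - rho * lag) ** 2).mean() / (lag @ lag))
    return rho / se

rng = np.random.default_rng(0)
stats = []
for _ in range(10):                         # 10 units, T = 200 each
    x = np.cumsum(rng.normal(size=200))
    y = x + rng.normal(size=200)            # cointegrated by construction
    stats.append(unit_df_stat(y, x))
print("average unit statistic:", np.mean(stats))
```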

Relevance: 90.00%

Abstract:

This paper proposes new error correction-based cointegration tests for panel data. The limiting distributions of the tests are derived and critical values provided. Our simulation results suggest that the tests have good small-sample properties with small size distortions and high power relative to other popular residual-based panel cointegration tests. In our empirical application, we present evidence suggesting that international healthcare expenditures and GDP are cointegrated once the possibility of an invalid common factor restriction has been accounted for.
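The error-correction logic behind such tests can be sketched for a single unit: regress the differenced dependent variable on the lagged levels and the differenced regressor, and examine the t-ratio on the lagged dependent level, which is zero under no cointegration. This is illustrative only, not the paper's exact test or its critical values.

```python
import numpy as np

def ec_tstat(y, x):
    """t-ratio on the error-correction term in dy_t = a + alpha*y_{t-1}
    + lambda*x_{t-1} + b*dx_t + e_t (alpha = 0 under no cointegration)."""
    dy, dx = np.diff(y), np.diff(x)
    Z = np.column_stack([np.ones(len(dy)), y[:-1], x[:-1], dx])
    beta = np.linalg.lstsq(Z, dy, rcond=None)[0]
    e = dy - Z @ beta
    s2 = (e @ e) / (len(dy) - Z.shape[1])
    cov = s2 * np.linalg.inv(Z.T @ Z)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=300))
y = 0.8 * x + rng.normal(size=300)          # cointegrated unit
print("error-correction t-ratio:", ec_tstat(y, x))
```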

Relevance: 90.00%

Abstract:

This paper analyses the importance of common factors in the recent evolution of metal prices over the period 1995-2013. To this end, cointegrated VAR models and a Bayesian dynamic factor model (DFM) are estimated. Given the financialization of commodities, the DFM can capture dynamic effects common to all commodities. In addition, panel data methods are applied to exploit the full heterogeneity across commodities over the period of analysis. Our results show that the interest rate, the effective US dollar exchange rate and consumption data have a permanent effect on commodity prices. We also find a common dynamic factor that is significant for most metal commodity prices and has recently become more important in the evolution of those prices.
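A frequentist analogue of the common-factor idea can be sketched with statsmodels' DynamicFactor model; the thesis estimates a Bayesian DFM, and the price series below are simulated placeholders rather than actual metal prices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.dynamic_factor import DynamicFactor

rng = np.random.default_rng(0)
f = np.cumsum(rng.normal(size=228)) * 0.05          # latent common driver
prices = pd.DataFrame(
    {m: f * rng.uniform(0.5, 1.5) + rng.normal(size=228) * 0.3
     for m in ["copper", "aluminum", "nickel", "zinc", "tin", "lead"]})
returns = prices.diff().dropna()                    # work with differences

mod = DynamicFactor(returns, k_factors=1, factor_order=1)
res = mod.fit(disp=False)
common = res.factors.smoothed[0]                    # estimated common factor
print(res.summary())
```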

Relevance: 90.00%

Abstract:

The objective of this research was to explore the common factors in the visions of the future held by three segments of the São Paulo community (executives, social entrepreneurs and thinkers), especially with respect to possible cooperative alliances between the business world and society as a whole, as well as the strategies used to bring them about. We asked whether, through their life experiences, the interviewees took a leading role in their visions of the future; which aspects of those visions of the future, and of the future of business, they had in common; which strategies were used to realize those shared visions perceived as positive; and in what way they could contribute to the development of a cooperative relationship between business and society. We used 30 interviews (10 in each segment), in an accidental sample, recorded and later analysed according to the framework of Enrique Pichon-Rivière's Social Psychology, including some indicators of the interactional process (cooperation, communication and tele-communication) and of the reactions of interviewees and interviewers to the content discussed (transference and countertransference). We drew on the Appreciative Inquiry protocol of the "Business as an Agent of World Benefit" project of the Weatherhead School of Management and on concepts converging with the adopted framework regarding the interplay between people and the world, protagonism, storytelling, the project as future planning, and the creation of new metaphors. With respect to the imagined future, we found unanimous concern with the environment; a change of values (revisiting the notion of well-being, "subjective deaths" caused by prejudice, expanded care extended to health professionals, and health as a value); interconnection (present in the accounting world, in equitable economic models, in the vision of the manager as statesman, in the integration between "inside and outside the business", in the awareness of wealth as a global rather than individual measure, in ethics, in volunteering out of conscience, in care for the environment, for oneself, for others, and for life and death); coherence, bonding and listening (focused on the quality of relationships rather than on technology, on honouring others, on sharing experiences, on the two-way street between business and community, on treating children and adults well); inclusion/exclusion (with the creation of intentionally inclusive public spaces and the real inclusion of the excluded in companies); education (through reasoning that addresses the prevailing linearity and encourages thinking about complexity, the recognition of healthy and constructive aspects of daily life, and training that encompasses managers, entrepreneurs and the community, including knowledge, ethics and gratitude); interiority (the soul of the business, intuition, transcendence as a differentiator influencing a new perception of profit, the sacredness of life, the encounter with oneself); profit (a revision of the concept with a focus on life, well-being, and people's rootedness); consumption/consumers (a change in the way intelligent investments are analysed, a new vision of poverty); and the long term (linked to sustainability, to people's self-worth and to employee education).
Many strategies are at work in the different segments; those articulated include: the intentional inclusion of diverse agents in public spaces, the revision of the concept of well-being, shared benefits, the earlier inclusion of young people in the business world (not as a form of exploitation), the encouragement of leadership attitudes in young people for the new world, and the long term as a theme to be explored further. As for the relationship between business and society, there seems to be no clarity across the segments about the roles played by companies, NGOs and communities. Points raised include the need to spread innovative ideas through non-profit institutions, to strengthen civil society, a new concept of social organization, NGOs no longer being necessary, solidarity communities as institutions in their own right, and the broadening of the meaning of social responsibility to encompass the ecosystem.

Relevance: 90.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity and propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
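The heteroskedasticity mechanism described above is easy to demonstrate: with i.i.d. individual-level errors, the variance of a group x time cell mean shrinks with group size, so unequal group sizes alone generate heteroskedastic errors in the aggregated DID regression. A minimal sketch with invented group sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (20, 50, 100, 500, 2000):                # observations per group
    # variance of a group x time cell mean with i.i.d. unit-level errors
    cell_means = rng.normal(size=(5000, n)).mean(axis=1)
    print(f"group size {n:>4}: cell-mean variance = {cell_means.var():.5f}")
```

The variance scales roughly as 1/n, so tests that treat the aggregated cells as homoskedastic over- or under-reject depending on whether the treated groups are small or large relative to the controls.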

Relevance: 90.00%

Abstract:

This article uses linear and nonlinear diffusion-index models to forecast, one period ahead, the quarterly growth rate of Brazilian agricultural GDP. These models are built from common factors, which allow a substantial reduction in the number of original explanatory variables. The forecast-accuracy results point to the superiority of the forecasts generated by the diffusion-index models over ARMA models. Among the diffusion-index models, the nonlinear model with a threshold effect outperformed both the linear model and the AR model.
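A minimal sketch of a threshold diffusion-index forecast of the kind described, assuming the common factor has already been extracted; the data are simulated and the threshold is fixed at zero rather than estimated.

```python
import numpy as np

def threshold_forecast(y, f, tau=0.0):
    """One-step forecast where the factor's coefficient depends on whether
    the lagged factor is below or above the threshold tau."""
    low = f[:-1] <= tau
    Z = np.column_stack([np.ones(len(y) - 1), y[:-1],
                         f[:-1] * low, f[:-1] * ~low])
    beta = np.linalg.lstsq(Z, y[1:], rcond=None)[0]
    z_last = np.array([1.0, y[-1],
                       f[-1] * (f[-1] <= tau), f[-1] * (f[-1] > tau)])
    return z_last @ beta

rng = np.random.default_rng(0)
f = rng.normal(size=120)                          # estimated common factor
y = np.zeros(120)
y[1:] = 0.3 * np.maximum(f[:-1], 0.0) + rng.normal(size=119) * 0.2
print("one-step-ahead forecast:", threshold_forecast(y, f))
```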

Relevance: 90.00%

Abstract:

The electromagnetic form factors of the proton are fundamental quantities sensitive to the distribution of charge and magnetization inside the proton. Precise knowledge of the form factors, in particular of the charge and magnetization radii, provides strong tests for theory in the non-perturbative regime of QCD. However, the existing data at Q^2 below 1 (GeV/c)^2 are not precise enough for a hard test of theoretical predictions.

For a more precise determination of the form factors, within this work more than 1400 cross sections of the reaction H(e,e′)p were measured at the Mainz Microtron MAMI using the three-spectrometer facility of the A1 collaboration. The data were taken in three periods in the years 2006 and 2007 using beam energies of 180, 315, 450, 585, 720 and 855 MeV. They cover the Q^2 region from 0.004 to 1 (GeV/c)^2 with counting-rate uncertainties below 0.2% for most of the data points. The relative luminosity of the measurements was determined using one of the spectrometers as a luminosity monitor. The overlapping acceptances of the measurements maximize the internal redundancy of the data and, together with several additions to the standard experimental setup, allow for tight control of systematic uncertainties.

To account for the radiative processes, an event generator was developed and implemented in the simulation package of the analysis software; it works without the peaking approximation by explicitly calculating the Bethe-Heitler and Born Feynman diagrams for each event.

To separate the form factors and to determine the radii, the data were analyzed by fitting a wide selection of form factor models directly to the measured cross sections. These fits also determined the absolute normalization of the different data subsets. The validity of this method was tested with extensive simulations, and the results were compared to an extraction via the standard Rosenbluth technique.

The dip structure in G_E seen in the analysis of the previous world data shows up in a modified form. When compared to the standard-dipole form factor as a smooth curve, the extracted G_E exhibits a strong change of slope around 0.1 (GeV/c)^2, and in the magnetic form factor a dip around 0.2 (GeV/c)^2 is found. This may be taken as an indication of a pion cloud. For higher Q^2, the fits yield larger values for G_M than previous measurements, in agreement with form factor ratios from recent precise polarized measurements in the Q^2 region up to 0.6 (GeV/c)^2.

The charge and magnetic rms radii are determined as
⟨r_e⟩ = 0.879 ± 0.005(stat.) ± 0.004(syst.) ± 0.002(model) ± 0.004(group) fm,
⟨r_m⟩ = 0.777 ± 0.013(stat.) ± 0.009(syst.) ± 0.005(model) ± 0.002(group) fm.
This charge radius is significantly larger than theoretical predictions and than the radius of the standard dipole. However, it is in agreement with earlier results measured at the Mainz linear accelerator and with determinations from hydrogen Lamb-shift measurements. The extracted magnetic radius is smaller than previous determinations and than the standard-dipole value.
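As an illustration of the fitting approach, the simplest of the form factor models, the standard dipole, can be fit to hypothetical G_E values with a least-squares routine; the thesis instead fits a wide selection of flexible models directly to the roughly 1400 measured cross sections.

```python
import numpy as np
from scipy.optimize import curve_fit

def dipole(Q2, a):
    """Standard dipole: G(Q^2) = (1 + Q^2/a)^(-2), with a in (GeV/c)^2."""
    return (1.0 + Q2 / a) ** -2

Q2 = np.linspace(0.004, 1.0, 40)                  # (GeV/c)^2 range of the data
G_true = dipole(Q2, 0.71)                         # standard dipole, a = 0.71
rng = np.random.default_rng(0)
G_obs = G_true * (1.0 + rng.normal(size=Q2.size) * 0.002)   # ~0.2% scatter

(a_fit,), cov = curve_fit(dipole, Q2, G_obs, p0=[0.7])
# For a dipole, <r^2> = 12/a (GeV^-2); convert with hbar*c = 0.19733 GeV*fm.
r_E = np.sqrt(12.0 / a_fit) * 0.19733
print(f"fitted dipole parameter = {a_fit:.4f} (GeV/c)^2, r_E = {r_E:.3f} fm")
```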