608 results for Modèle OSI


Relevance: 10.00%

Abstract:

This paper derives the ARMA representation of integrated and realized variances when the spot variance depends linearly on two autoregressive factors, i.e., for SR-SARV(2) models. This class of processes includes affine, GARCH diffusion and CEV models, as well as the eigenfunction stochastic volatility and the positive Ornstein-Uhlenbeck models. We also study the leverage effect case and the relationship between the weak GARCH representation of returns and the ARMA representation of realized variances. Finally, various empirical implications of these ARMA representations are considered. We find that some parameters of the ARMA representation may be negative, so the positiveness of the expected values of integrated or realized variances is not guaranteed. We also find that, for some observation frequencies, the continuous-time model parameters may be weakly identified or not identified at all through the ARMA representation of realized variances.
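
For readers who want to see the object being discussed, the sketch below is a minimal illustration rather than the paper's SR-SARV(2) derivation: it simulates a crude two-factor spot-variance process, builds daily realized variances from squared intraday returns, and fits a low-order ARMA to the resulting series. All parameter values and the discretization are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Illustrative two-factor spot variance: v_t = f1_t + f2_t, each factor mean-reverting.
# This is a stand-in for the SR-SARV(2) class; all parameters are arbitrary.
n_days, n_intraday = 2000, 78
dt = 1.0 / n_intraday
kappa = np.array([0.05, 0.005])   # mean-reversion speeds per intraday step
theta = np.array([0.5, 0.5])      # factor means (daily-variance units)
sigma = np.array([0.10, 0.05])    # factor vol-of-vol

f = theta.copy()
rv = np.empty(n_days)
for d in range(n_days):
    iv = 0.0
    for _ in range(n_intraday):
        v = f.sum()
        r = np.sqrt(max(v, 1e-12) * dt) * rng.standard_normal()  # intraday return
        iv += r ** 2                                              # realized variance increment
        f = f + kappa * (theta - f) + sigma * np.sqrt(dt) * rng.standard_normal(2)
        f = np.maximum(f, 1e-12)
    rv[d] = iv

# A two-factor variance process suggests an ARMA(2,2)-type representation for
# realized variance; here we simply fit one to the simulated series.
fit = ARIMA(rv, order=(2, 0, 2)).fit()
print(fit.params)
```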

Relevance: 10.00%

Abstract:

This note develops general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit the recent asymptotic distributional results in Barndorff-Nielsen and Shephard (2002a), are both easy to implement and highly accurate in empirically realistic situations. On properly accounting for the measurement errors in the volatility forecast evaluations reported in Andersen, Bollerslev, Diebold and Labys (2003), the adjustments result in markedly higher estimates for the true degree of return-volatility predictability.
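
A minimal sketch of the flavour of such an adjustment, not the note's exact procedure: assuming equally spaced intraday returns, the measurement-error variance of daily realized variance is estimated from realized quarticity (the Barndorff-Nielsen and Shephard asymptotics suggest roughly two thirds of the sum of fourth-power returns), and the Mincer-Zarnowitz R² of a volatility forecast is then scaled up by a simple variance decomposition. Function names and inputs are hypothetical.

```python
import numpy as np

def rv_error_variance(intraday_returns):
    """Estimated variance of the RV measurement error for one day, based on the
    Barndorff-Nielsen / Shephard asymptotics: Var(RV - IV) ~= (2/3) * sum(r_i^4)
    (illustrative version, equally spaced returns)."""
    r = np.asarray(intraday_returns)
    return (2.0 / 3.0) * np.sum(r ** 4)

def adjusted_r2(rv, forecast, daily_error_var):
    """Correct the Mincer-Zarnowitz R^2 of a realized-variance forecast for the
    measurement error in RV (a simple variance-decomposition version of the
    adjustment idea; not the note's exact formula)."""
    rv, forecast = np.asarray(rv), np.asarray(forecast)
    r2_rv = np.corrcoef(rv, forecast)[0, 1] ** 2
    var_rv = rv.var(ddof=1)
    var_iv = var_rv - np.mean(daily_error_var)   # implied variance of latent IV
    return r2_rv * var_rv / var_iv

# Usage (with hypothetical arrays `returns_by_day`, `rv_series`, `vol_forecast`):
# err_var = [rv_error_variance(day) for day in returns_by_day]
# print(adjusted_r2(rv_series, vol_forecast, err_var))
```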

Relevance: 10.00%

Abstract:

In this paper, we propose several finite-sample specification tests for multivariate linear regressions (MLR), with applications to asset pricing models. We focus on departures from the assumption of i.i.d. errors, at both the univariate and multivariate levels, with Gaussian and non-Gaussian (including Student t) errors. The univariate tests studied extend existing exact procedures by allowing for unspecified parameters in the error distributions (e.g., the degrees of freedom in the case of the Student t distribution). The multivariate tests are based on properly standardized multivariate residuals to ensure invariance to MLR coefficients and error covariances. We consider tests for serial correlation, tests for multivariate GARCH, and sign-type tests against general dependencies and asymmetries. The procedures proposed provide exact versions of those applied in Shanken (1990), which consist in combining univariate specification tests. Specifically, we combine tests across equations using the MC test procedure to avoid Bonferroni-type bounds. Since non-Gaussian-based tests are not pivotal, we apply the “maximized MC” (MMC) test method [Dufour (2002)], where the MC p-value for the tested hypothesis (which depends on nuisance parameters) is maximized (with respect to these nuisance parameters) to control the test’s significance level. The tests proposed are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995. Our empirical results reveal the following. Whereas univariate exact tests indicate significant serial correlation, asymmetries and GARCH in some equations, such effects are much less prevalent once error cross-equation covariances are accounted for. In addition, significant departures from the i.i.d. hypothesis are less evident once we allow for non-Gaussian errors.
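
The MC and MMC logic can be illustrated schematically (this is not the paper's statistics or its MLR setting): an exact Monte Carlo p-value ranks the observed statistic among simulated draws under the null, and the maximized MC p-value takes the largest such p-value over a grid of nuisance-parameter values, here a hypothetical Student-t degrees-of-freedom grid for a toy serial-correlation statistic.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_pvalue(stat_obs, simulate_stat, n_rep=99):
    """Exact Monte Carlo p-value: rank the observed statistic among n_rep
    simulated draws from its null distribution."""
    sims = np.array([simulate_stat() for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

def mmc_pvalue(stat_obs, simulate_stat_given_nuisance, nuisance_grid, n_rep=99):
    """Maximized MC p-value: maximize the MC p-value over the nuisance-parameter
    grid, so the test's level is controlled whatever the true nuisance value."""
    return max(
        mc_pvalue(stat_obs, lambda: simulate_stat_given_nuisance(nu), n_rep)
        for nu in nuisance_grid
    )

# Toy illustration: a test of no serial correlation when errors are Student t
# with unknown degrees of freedom nu (the nuisance parameter).
def autocorr_stat(x):
    x = x - x.mean()
    return abs(np.sum(x[1:] * x[:-1]) / np.sum(x ** 2))

n = 200
data = rng.standard_t(df=8, size=n)          # hypothetical "residual" series
stat = autocorr_stat(data)
p = mmc_pvalue(stat,
               lambda nu: autocorr_stat(rng.standard_t(df=nu, size=n)),
               nuisance_grid=[5, 8, 12, 20, 50])
print(f"MMC p-value for no serial correlation with t errors: {p:.3f}")
```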

Relevance: 10.00%

Abstract:

We study the problem of testing the error distribution in a multivariate linear regression (MLR) model. The tests are functions of appropriately standardized multivariate least squares residuals whose distribution is invariant to the unknown cross-equation error covariance matrix. Empirical multivariate skewness and kurtosis criteria are then compared to simulation-based estimates of their expected value under the hypothesized distribution. Special cases considered include testing multivariate normal, Student t, normal mixture and stable error models. In the Gaussian case, finite-sample versions of the standard multivariate skewness and kurtosis tests are derived. To do this, we exploit simple, double and multi-stage Monte Carlo test methods. For non-Gaussian distribution families involving nuisance parameters, confidence sets are derived for the nuisance parameters and the error distribution. The procedures considered are evaluated in a small simulation experiment. Finally, the tests are applied to an asset pricing model with observable risk-free rates, using monthly returns on New York Stock Exchange (NYSE) portfolios over five-year subperiods from 1926 to 1995.
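
As a rough illustration of the ingredients (standardized residuals, skewness/kurtosis criteria, simulation under the hypothesized distribution), the sketch below computes Mardia-type multivariate skewness and kurtosis together with Monte Carlo p-values under multivariate normality; it is a simplified stand-in for the paper's exact finite-sample procedures, and the residual matrix is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def mardia_stats(u):
    """Mardia-type multivariate skewness and kurtosis of an (n x p) array,
    computed from residuals standardized by their own sample covariance."""
    n, p = u.shape
    uc = u - u.mean(axis=0)
    s_inv = np.linalg.inv(np.cov(uc, rowvar=False, bias=True))
    d = uc @ s_inv @ uc.T                      # Mahalanobis cross-products
    skew = np.mean(d ** 3)                     # b_{1,p}
    kurt = np.mean(np.diag(d) ** 2)            # b_{2,p}
    return skew, kurt

def mc_pvalues(resid, n_rep=99):
    """Monte Carlo p-values for the skewness and kurtosis criteria under
    hypothesized multivariate normal errors (a simplified version of the
    simulation-based comparison described in the abstract)."""
    n, p = resid.shape
    s_obs, k_obs = mardia_stats(resid)
    sims = np.array([mardia_stats(rng.standard_normal((n, p))) for _ in range(n_rep)])
    p_skew = (1 + np.sum(sims[:, 0] >= s_obs)) / (n_rep + 1)
    k_dev = np.abs(sims[:, 1] - sims[:, 1].mean())
    p_kurt = (1 + np.sum(k_dev >= abs(k_obs - sims[:, 1].mean()))) / (n_rep + 1)
    return p_skew, p_kurt

# Hypothetical residual matrix (n observations, p equations):
resid = rng.standard_t(df=6, size=(120, 5))
print(mc_pvalues(resid))
```

Because the statistics are built from residuals standardized by their own covariance matrix, simulating from a standard multivariate normal is enough under the Gaussian null, which is what makes the Monte Carlo comparison feasible.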

Relevance: 10.00%

Abstract:

It is well known that standard asymptotic theory is not valid, or is extremely unreliable, in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, and in general does not yield inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. Anderson-Rubin-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. Until now, however, such projections could be implemented only through costly numerical techniques. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution involves the geometric properties of “quadrics” and can be viewed as an extension of the usual confidence intervals and ellipsoids. Only least squares techniques are required for building the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the proposed methods by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
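
The sketch below illustrates the objects involved, under simplifying assumptions (no included exogenous regressors, simulated data): an Anderson-Rubin statistic evaluated on a grid of coefficient values, and a brute-force projection of the joint confidence set onto one coefficient. The paper's contribution is precisely to replace this costly grid search with a closed-form solution based on quadrics.

```python
import numpy as np
from itertools import product
from scipy import stats

def ar_stat(y, Y, Z, beta0):
    """Anderson-Rubin F-statistic for H0: beta = beta0 in y = Y beta + u,
    with instrument matrix Z (no included exogenous regressors, for simplicity)."""
    n, k = Z.shape
    v = y - Y @ beta0
    u_hat = v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]   # residual from regressing v on Z
    ssr_u, ssr_r = u_hat @ u_hat, v @ v
    return ((ssr_r - ssr_u) / k) / (ssr_u / (n - k))

def projection_interval(y, Y, Z, grids, coord, level=0.05):
    """Brute-force projection of the joint AR confidence set onto one coefficient.
    The paper derives this projection analytically; the grid search here only
    illustrates what is being projected."""
    n, k = Z.shape
    crit = stats.f.ppf(1 - level, k, n - k)
    kept = [b[coord] for b in map(np.array, product(*grids))
            if ar_stat(y, Y, Z, b) <= crit]
    return (min(kept), max(kept)) if kept else None

# Hypothetical simulated data: two endogenous regressors, four instruments.
rng = np.random.default_rng(3)
n = 300
Z = rng.standard_normal((n, 4))
V = rng.standard_normal((n, 2))
Y = Z @ rng.standard_normal((4, 2)) * 0.5 + V
u = 0.8 * V[:, 0] + rng.standard_normal(n)
y = Y @ np.array([1.0, -0.5]) + u

grids = (np.linspace(0.0, 2.0, 41), np.linspace(-1.5, 0.5, 41))
print(projection_interval(y, Y, Z, grids, coord=0))
```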

Relevance: 10.00%

Abstract:

We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses for which test procedures are commonly proposed are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions, a condition not satisfied by standard Wald-type methods based on standard errors, and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.

Relevance: 10.00%

Abstract:

We propose methods for testing hypotheses of non-causality at various horizons, as defined in Dufour and Renault (1998, Econometrica). We study in detail the case of VAR models and we propose linear methods based on running vector autoregressions at different horizons. While the hypotheses considered are nonlinear, the proposed methods only require linear regression techniques as well as standard Gaussian asymptotic distributional theory. Bootstrap procedures are also considered. For the case of integrated processes, we propose extended regression methods that avoid nonstandard asymptotics. The methods are applied to a VAR model of the U.S. economy.
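
A minimal sketch of the horizon-h regression idea (not the paper's exact test): regress y_{t+h} on p lags of y and x, and test the x-lag coefficients with HAC standard errors, since the horizon-h projection errors are serially correlated. The data-generating process and lag choices are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def horizon_h_causality_test(y, x, p=4, h=2):
    """Test non-causality from x to y at horizon h by regressing y_{t+h} on
    p lags of y and x and jointly testing the x-lag coefficients (a simplified
    version of the horizon-h regressions described in the abstract)."""
    T = len(y)
    rows, targets = [], []
    for t in range(p - 1, T - h):
        lags = np.concatenate([y[t - p + 1:t + 1][::-1], x[t - p + 1:t + 1][::-1]])
        rows.append(lags)
        targets.append(y[t + h])
    X = sm.add_constant(np.array(rows))
    res = sm.OLS(np.array(targets), X).fit(cov_type="HAC", cov_kwds={"maxlags": h})
    # column 0 is the constant, columns 1..p the y lags, columns p+1..2p the x lags
    R = np.zeros((p, X.shape[1]))
    R[np.arange(p), np.arange(p + 1, 2 * p + 1)] = 1.0
    return res.f_test(R)

# Hypothetical bivariate series in which x leads y by one period:
rng = np.random.default_rng(4)
T = 500
x, y = np.zeros(T), np.zeros(T)
eps = rng.standard_normal((T, 2))
for t in range(1, T):
    x[t] = 0.3 * x[t - 1] + eps[t, 0]
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + eps[t, 1]
print(horizon_h_causality_test(y, x, p=4, h=2))
```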

Relevance: 10.00%

Abstract:

This paper studies the transition between exchange rate regimes using a Markov chain model with time-varying transition probabilities. The probabilities are parameterized as nonlinear functions of variables suggested by the currency crisis and optimal currency area literature. Results using annual data indicate that inflation, and to a lesser extent, output growth and trade openness help explain the exchange rate regime transition dynamics.
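
A minimal sketch of one way such a model can be parameterized and estimated (assuming two regimes and a logistic link; the paper's exact specification and covariates may differ): the probability of staying in each regime is a logistic function of covariates, and the transition parameters are estimated by maximizing the likelihood of the observed regime sequence.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def tvtp_negloglik(params, regimes, X):
    """Negative log-likelihood of an observed two-regime sequence when the
    transition probabilities are logistic functions of covariates X.
    params = (beta_stay0, beta_stay1) for the two 'stay in regime' probabilities."""
    k = X.shape[1]
    b0, b1 = params[:k], params[k:]
    p_stay0 = expit(X @ b0)        # P(s_t = 0 | s_{t-1} = 0, X_t)
    p_stay1 = expit(X @ b1)        # P(s_t = 1 | s_{t-1} = 1, X_t)
    ll = 0.0
    for t in range(1, len(regimes)):
        if regimes[t - 1] == 0:
            p = p_stay0[t] if regimes[t] == 0 else 1 - p_stay0[t]
        else:
            p = p_stay1[t] if regimes[t] == 1 else 1 - p_stay1[t]
        ll += np.log(max(p, 1e-12))
    return -ll

# Hypothetical data: regime indicator (0 = peg, 1 = float) and covariates
# (constant, inflation, output growth, trade openness).
rng = np.random.default_rng(5)
T = 300
X = np.column_stack([np.ones(T), rng.standard_normal((T, 3))])
regimes = (rng.random(T) < 0.3).astype(int)      # placeholder regime path

res = minimize(tvtp_negloglik, x0=np.zeros(2 * X.shape[1]),
               args=(regimes, X), method="BFGS")
print(res.x.round(3))
```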

Relevance: 10.00%

Abstract:

This paper employs the one-sector Real Business Cycle model as a testing ground for four different procedures to estimate Dynamic Stochastic General Equilibrium (DSGE) models. The procedures are: 1) Maximum Likelihood, with and without measurement errors and incorporating Bayesian priors; 2) Generalized Method of Moments; 3) Simulated Method of Moments; and 4) Indirect Inference. Monte Carlo analysis indicates that all procedures deliver reasonably good estimates under the null hypothesis. However, there are substantial differences in statistical and computational efficiency in the small samples currently available to estimate DSGE models. GMM and SMM appear to be more robust to misspecification than the alternative procedures. The implications of the stochastic singularity of DSGE models for each estimation method are fully discussed.
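
The SMM step in such a comparison can be illustrated with a toy model standing in for the solved DSGE (an AR(1) here, purely for illustration): simulate the model for candidate parameters with a fixed set of shocks, and minimize the distance between simulated and data moments.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

def moments(series):
    """Moments to match: variance and the first two autocovariances."""
    x = series - series.mean()
    return np.array([np.mean(x ** 2),
                     np.mean(x[1:] * x[:-1]),
                     np.mean(x[2:] * x[:-2])])

def simulate_model(theta, T, shocks):
    """Toy 'structural' model standing in for a DSGE solution: an AR(1) with
    persistence rho and shock volatility sigma, driven by fixed shocks so the
    SMM objective is smooth in theta."""
    rho, sigma = theta
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + sigma * shocks[t]
    return x

def smm_objective(theta, data_moments, T_sim, shocks, W):
    g = moments(simulate_model(theta, T_sim, shocks)) - data_moments
    return g @ W @ g

# "Data" generated from the model at rho = 0.9, sigma = 1; SMM recovers the
# parameters by matching moments of a long simulated path (identity weighting).
T, T_sim = 300, 30000
data = simulate_model((0.9, 1.0), T, rng.standard_normal(T))
shocks = rng.standard_normal(T_sim)              # held fixed across evaluations
res = minimize(smm_objective, x0=np.array([0.5, 0.5]),
               args=(moments(data), T_sim, shocks, np.eye(3)),
               method="Nelder-Mead")
print(res.x.round(3))
```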

Relevance: 10.00%

Abstract:

In the last decade, the potential macroeconomic effects of intermittent large adjustments in microeconomic decision variables such as prices, investment, consumption of durables or employment, a behavior which may be justified by the presence of kinked adjustment costs, have been studied in models where economic agents continuously observe the optimal level of their decision variable. In this paper, we develop a simple model which introduces infrequent information into a kinked adjustment cost model by assuming that agents do not continuously observe the frictionless optimal level of the control variable. Periodic releases of macroeconomic statistics or dividend announcements are examples of such infrequent information arrivals. We first solve for the optimal individual decision rule, which is found to be both state and time dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. Our model has the distinct characteristic that a vast number of agents tend to act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. We show that these results differ substantially from the ones obtained with full-information adjustment cost models.
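
A deliberately crude simulation of the clustering mechanism described above (not the paper's model or its aggregation framework): agents follow a band rule on the gap between their control and a frictionless target, but only learn the target at periodic information dates, so adjustments bunch at those dates.

```python
import numpy as np

rng = np.random.default_rng(7)

# Agents adjust only when the perceived gap between their control and the
# frictionless optimum exceeds a band (kinked adjustment costs), and they only
# observe that optimum at periodic information dates.
n_agents, T, band, info_every = 5000, 200, 1.0, 10
optimum = np.cumsum(0.2 + 0.5 * rng.standard_normal(T))   # common frictionless target
control = np.zeros(n_agents)
perceived = np.zeros(n_agents)
adjusting = np.zeros(T)

for t in range(T):
    if t % info_every == 0:                                # infrequent information release
        perceived = optimum[t] + rng.standard_normal(n_agents)  # noisy individual targets
    gap = perceived - control
    adjust = np.abs(gap) > band                            # act only on large gaps
    control[adjust] = perceived[adjust]
    adjusting[t] = adjust.mean()

print("share adjusting at information dates: %.2f" % adjusting[::info_every].mean())
print("share adjusting at other dates:       %.2f"
      % np.delete(adjusting, np.arange(0, T, info_every)).mean())
```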

Relevance: 10.00%

Abstract:

This paper examines the use of bundling by a firm that sells in two national markets and faces entry by parallel traders. The firm can bundle its main product, a tradable good, with a non-traded service. It chooses between the strategies of pure bundling, mixed bundling and no bundling. The paper shows that in the low-price country the threat of grey trade elicits a move from mixed bundling, or no bundling, towards pure bundling. It encourages a move from pure bundling towards mixed bundling or no bundling in the high-price country. The set of parameter values for which the profit-maximizing strategy is not to supply the low-price country is smaller than in the absence of bundling. The welfare effects of the deterrence of grey trade are not those found in conventional models of price arbitrage. Some consumers in the low-price country may gain from the threat of entry by parallel traders even though they pay a higher price, because the firm responds to the threat of arbitrageurs by increasing the amount of services it puts in the bundle targeted at consumers in that country. Similarly, the threat of parallel trade may affect some consumers in the high-price country adversely.

Relevance: 10.00%

Abstract:

A major asset of organizations is their capacity to create and exploit information and knowledge, a capacity determined, among other things, by information behaviour. Responsible for strategic, tactical and operational decisions, middle managers are at the heart of the knowledge creation process, and their information behaviour must be supported by information systems. Yet this behaviour remains poorly documented. This research models the information behaviour of middle managers in a municipal organization. More specifically, it examines how these managers meet their day-to-day information needs in the context of their management activities, that is, in their information use environment. The study addresses the following research questions: (1) What problem situations do municipal middle managers face? (2) What information needs do municipal middle managers express in problem situations? (3) What information sources support the information behaviour of municipal middle managers? This descriptive research follows a qualitative approach. The 21 middle managers who took part in the study come from two boroughs of a Quebec municipality merged in 2002. Data were collected through in-depth face-to-face interviews, direct observation of these managers, and the gathering of relevant documentation. The critical incident was used both as a data collection technique and as the unit of analysis. The data were subjected to a qualitative content analysis based on grounded theory. The results indicate that the management roles proposed in the literature for senior managers also apply to middle managers, although the advisory role stands out as specific to the latter. Middle managers have management responsibilities at the operational, tactical and strategic levels, although they work mainly at the tactical level. The problem situations they handle fall within an information use environment made up of the following components: their management roles and responsibilities, and the organizational context of a municipality undergoing transformation. Middle managers dealt with more new situations than recurring ones, mostly concerning material and real-estate resources or legal, regulatory and normative matters. They mainly expressed needs for processual and contextual information. To meet these needs they consulted more verbal than documentary sources, even though the number of the latter remains high, and preferred internal information sources. At the theoretical level, the information behaviour model proposed for municipal middle managers enriches the main components of the general model of information use (Choo, 1998) and of the information use environment model (Taylor, 1986, 1991). The study also helps refine the concepts of "user" and "information use".
At the practical level, the research supports the design of information retrieval systems suited to the needs of municipal middle managers and helps assess the contribution of archival information systems to the management of organizational memory.

Relevance: 10.00%

Abstract:

Conseil canadien de la magistrature

Relevance: 10.00%

Abstract:

This research report presents a study of tacit knowledge transfer among managers, that is, the sharing and informal use of such knowledge, during a coordination situation in a municipal department. The thesis is organized around the following questions: What coordination situations do municipal managers experience? What sources of tacit knowledge are shared and used? What knowledge relationships are mobilized informally during tacit knowledge transfer? What factors encourage or inhibit the informal transfer of tacit knowledge? Starting from a model based on a situational approach (Taylor, 1989 and 1991), we reviewed the literature related to our research questions. In particular, we defined knowledge recursivity and the knowledge network, and presented the knowledge conversion model (Nonaka, 1994) and the self-actualization model (St-Arnaud, 1996). We questioned 22 respondents using instruments that combine the critical incident technique, cognitive and reflective interviews, questions on organizational networks, and participant observation. Like nets, these instruments made it possible to track down and capture very rich data on tacit knowledge and informal behaviour during knowledge transfer in coordination situations. The data were analyzed with an essentially qualitative methodology combining content analysis, mind mapping and social network analysis. Our results show that the complexity of a coordination situation conditions the choice of coordination mechanisms. Moreover, from the individual standpoint, the sources of knowledge are the manager and his or her artifacts, together with his or her personal network and its own artifacts; from the collective standpoint, these sources are reified in the knowledge network. The key knowledge in a coordination situation concerns the organizational network, the context, experience in management and in complex coordination situations, and the ability to communicate, negotiate, innovate and attract attention. Individually, managers favour self-actualization, self-directed learning and contextualized training; collectively, they favour co-presence in action, networking and mentoring. This study provides a valid model of contextualized knowledge transfer, which is a case of complex coordination of knowledge management activities. This transfer is concomitant with other coordination situations. The tacit nature of knowledge prevails, as do the informal mode, personal media and mutual adjustment mechanisms. Tacit knowledge is mainly transferred at the beginning of project management processes and continuously during feedback and the follow-up of results. As for explicit knowledge, managers use it mainly as a symbol at the end of project management processes. Among the individuals and groups involved in a contextualized knowledge transfer situation, 10% play key roles, namely those of experts and of brokers of people and artifacts.
People at the periphery have a structuring potential, that is, a connectivity potential, for ensuring the continuity of the organizational knowledge network. Our study extended the general model of situation complexity (Bystrom, 1999; Choo, 2006; Taylor, 1986 and 1991), coordination theory (Malone and Crowston, 1994), the knowledge conversion model (Nonaka, 1994), the self-actualization model (St-Arnaud, 1996) and knowledge network theory (Monge and Contractor, 2003). Our model reaffirms the concomitance of these general models within a constructivist approach (Giddens, 1987), in which the duality of structure and the agency of actors are confirmed and enriched.