676 results for: branching processes, Kesten-Stigum theorem
Abstract:
The financing of Quebec municipalities rests largely on own-source revenues, the principal one being their power to tax real property. Consequently, the legislator, seeking to ensure the financial stability of municipalities, has strictly framed the process of preparing and revising property assessments through several statutes and regulations. This has nevertheless not prevented the number of challenges from rising with each new roll. Beginning with an application for administrative review, filed on a simple form, the dispute between the municipality and the taxpayer may proceed before the Tribunal administratif du Québec and even the Court of Québec, the Superior Court and the Court of Appeal, where the procedure becomes increasingly demanding. The transition from the administrative to the judicial process sometimes creates friction within the case law, notably as to the deference owed to the specialized tribunal or the flexibility of the rules of evidence applicable before it. Through a positivist study of the law, we first analyze the procedure for preparing the property assessment roll, setting out the actors and their responsibilities as well as the fundamental concepts involved in establishing the real value of immovables. We then retrace each step in the contestation of an entry on the roll, surveying the various rules of jurisdiction, evidence and procedure applicable at each level. Drawing on numerous examples from the case law, we seek to highlight the courts' differing interpretations of the Loi sur la fiscalité municipale and related legislation.
Abstract:
The objective of this qualitative study is to describe and understand the decision-making process underlying a second language teacher's oral corrective feedback. To that end, it describes the main factors that influence the decision to provide corrective feedback, as well as those underlying the choice of a particular feedback technique. Three teachers of French as a second language working with adult immigrants in Canada participated in this research. Complete teaching sequences were filmed and then shown to the participants, who commented on their practice. The verbalization session took the form of a stimulated recall followed by an interview; this session constitutes the data of the study. The results revealed that corrective feedback, as well as the choice of technique, was influenced by factors relating to the error, the learner, the curriculum, the teacher, and the characteristics of the techniques. They also revealed that the learner is at the heart of second language teachers' feedback decision-making: the participants reported wanting to adapt to the learner's cognitive functioning, affective state, language level, and the recurrence of his or her errors. A further aim of this study is to inform preservice and in-service L2 teacher education. To that end, pedagogical implications are discussed, and it is recommended that L2 teachers be made aware of research findings on the effectiveness of corrective feedback techniques, particularly those that take learner characteristics into account.
Abstract:
This article aims to demystify OpenURL, a term increasingly present in the information field, through theoretical concepts but above all through a practical description of its deployment and capabilities. A short introduction to the concept of linking among documentary resources sets the stage for a brief overview of the birth of OpenURL, followed by a concrete explanation of how it works in the literature search process. For users, the strength of this technology is that it provides, where possible, direct and seamless access to electronic resources while offering highly relevant additional options. In practical terms, a user can now search for a bibliographic reference in a database and go directly to the full text of the article if his or her institution subscribes to it, or else be redirected elsewhere, for instance to the institution's interlibrary loan form. This emerging standard, still in its infancy, offers clear advantages for research libraries.
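To make the mechanism concrete, here is a minimal sketch, in Python, of how a source database might construct an OpenURL in the early key/value (0.1-style) format; the resolver address and citation metadata below are hypothetical placeholders, not taken from the article.

```python
from urllib.parse import urlencode

# Hypothetical link-resolver base URL for an institution.
RESOLVER_BASE = "https://resolver.example.edu/openurl"

# Placeholder citation metadata describing the article the user found.
citation = {
    "genre": "article",
    "issn": "0000-0000",
    "date": "2004",
    "volume": "12",
    "issue": "3",
    "spage": "45",
    "atitle": "An example article title",
}

# The source database appends the citation to the resolver URL; the resolver
# then checks institutional holdings and either links to the full text or
# offers alternatives such as an interlibrary loan form.
openurl = RESOLVER_BASE + "?" + urlencode(citation)
print(openurl)
```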
Abstract:
Translated from the English by Jimmy Légaré and Olivier Paradis (Direction des bibliothèques de l'UdeM).
Abstract:
Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of Stochastic Discount Factor (SDF) or pricing kernel as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only factor risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
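For readers who want the relations the abstract alludes to in symbols, a standard textbook statement of the SDF pricing equation and the resulting conditional beta representation runs as follows; the notation is ours, not the paper's.

```latex
% Fundamental pricing equation: for any gross return R_{i,t+1},
\[
  \mathbb{E}_t\!\left[\, m_{t+1}\, R_{i,t+1} \right] = 1 ,
\]
% where m_{t+1} is the stochastic discount factor. Specifying m_{t+1} as a
% combination of factors F_{t+1} with coefficients driven by state variables Y_t,
\[
  m_{t+1} = a(Y_t) + b(Y_t)^{\top} F_{t+1} ,
\]
% yields a conditional beta pricing relation of the familiar form
\[
  \mathbb{E}_t\!\left[ R_{i,t+1} \right] - r_{f,t}
    = \beta_{i,t}^{\top}\, \lambda_t ,
\]
% with conditional factor loadings \beta_{i,t} and risk premia \lambda_t.
```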
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
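As an illustration of the split-sample idea, here is our simplification for the Gaussian AR(1) case, not the paper's general statement: conditioning the odd-indexed observations on the even-indexed ones turns the one-sided model into a two-sided regression.

```latex
% Gaussian AR(1): y_t = \rho y_{t-1} + e_t. Conditionally on the even-indexed
% observations, each odd-indexed y_t satisfies the two-sided autoregression
\[
  y_t \;=\; \frac{\rho}{1+\rho^{2}}\,\bigl(y_{t-1} + y_{t+1}\bigr) \;+\; u_t ,
\]
% where, by intercalary independence, the errors u_t are mutually independent
% across the retained dates, so classical linear-regression tests and
% confidence sets apply to the transformed model.
```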
Abstract:
In this paper, we characterize the asymmetries of the smile through multiple leverage effects in a stochastic dynamic asset pricing framework. The dependence between price movements and future volatility is introduced through a set of latent state variables. These latent variables can capture not only the volatility risk and the interest rate risk which potentially affect option prices, but also any kind of correlation risk and jump risk. The standard financial leverage effect is produced by a cross-correlation effect between the state variables which enter into the stochastic volatility process of the stock price and the stock price process itself. However, we provide a more general framework where asymmetric implied volatility curves result from any source of instantaneous correlation between the state variables and either the return on the stock or the stochastic discount factor. In order to draw the shapes of the implied volatility curves generated by a model with latent variables, we specify an equilibrium-based stochastic discount factor with time non-separable preferences. When we calibrate this model to empirically reasonable values of the parameters, we are able to reproduce the various types of implied volatility curves inferred from option market data.
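As a point of reference, the "standard financial leverage effect" the abstract starts from can be sketched with a single-factor stochastic volatility diffusion (a Heston-type illustration of ours, not the paper's multi-factor setting):

```latex
% Stock price and spot variance with correlated Brownian shocks:
\[
  \frac{dS_t}{S_t} = \mu\,dt + \sigma_t\,dW_t ,
  \qquad
  d\sigma_t^{2} = \kappa\,(\theta - \sigma_t^{2})\,dt + \xi\,\sigma_t\,dZ_t ,
  \qquad
  d\langle W, Z\rangle_t = \rho\,dt .
\]
% A negative correlation \rho ties price drops to volatility rises and skews
% the implied volatility curve; the paper obtains such asymmetries from any
% instantaneous correlation between state variables and either the return
% or the stochastic discount factor.
```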
Abstract:
This paper assesses the empirical performance of an intertemporal option pricing model with latent variables which generalizes the Hull-White stochastic volatility formula. Using this generalized formula in an ad hoc fashion to extract two implicit parameters and forecast next-day S&P 500 option prices, we obtain pricing errors similar to those obtained with implied volatility alone, as in the Hull-White case. When we specialize this model to an equilibrium recursive utility model, we show through simulations that option prices are more informative than stock prices about the structural parameters of the model. We also show that a simple method of moments with a panel of option prices provides good estimates of the parameters of the model. This lays the groundwork for an empirical assessment of this equilibrium model with S&P 500 option prices in terms of pricing errors.
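For context, the Hull-White stochastic volatility formula that the paper generalizes states that, when volatility risk is unpriced and volatility shocks are independent of return shocks, the option price is the Black-Scholes price averaged over integrated variance:

```latex
\[
  C_t \;=\; \mathbb{E}_t\!\left[\, C^{BS}\!\bigl(S_t, K, \tau, \bar{V}_{t,\tau}\bigr) \right] ,
  \qquad
  \bar{V}_{t,\tau} \;=\; \frac{1}{\tau}\int_{t}^{t+\tau} \sigma_u^{2}\,du ,
\]
% where C^{BS} is the Black-Scholes price evaluated at the average variance
% \bar{V}_{t,\tau} over the option's remaining life \tau.
```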
Abstract:
This paper proves a new representation theorem for domains with both discrete and continuous variables. The result generalizes Debreu's well-known representation theorem on connected domains. A strengthening of the standard continuity axiom is used in order to guarantee the existence of a representation. A generalization of the main theorem and an application of the more general result are also presented.
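For reference, Debreu's representation theorem on connected domains, which the paper generalizes, can be stated as follows (standard formulation):

```latex
% Debreu: if X is a connected and separable topological space and \succsim is
% a complete, transitive and continuous binary relation on X, then there is a
% continuous function u : X \to \mathbb{R} with
\[
  x \succsim y \iff u(x) \ge u(y) \qquad \text{for all } x, y \in X .
\]
% Mixed discrete-continuous domains are not connected, which is why the paper
% strengthens the continuity axiom to recover a representation.
```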
Abstract:
Presently, conditions ensuring the validity of bootstrap methods for the sample mean of (possibly heterogeneous) near epoch dependent (NED) functions of mixing processes are unknown. Here we establish the validity of the bootstrap in this context, extending the applicability of bootstrap methods to a class of processes broadly relevant for applications in economics and finance. Our results apply to two block bootstrap methods: the moving blocks bootstrap of Künsch (1989) and Liu and Singh (1992), and the stationary bootstrap of Politis and Romano (1994). In particular, the consistency of the bootstrap variance estimator for the sample mean is shown to be robust against heteroskedasticity and dependence of unknown form. The first-order asymptotic validity of the bootstrap approximation to the actual distribution of the sample mean is also established in this heterogeneous NED context.
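To fix ideas, here is a minimal, self-contained sketch of the moving blocks bootstrap for the sample mean (an illustration under our own simplifying choices, not the paper's formal construction):

```python
import numpy as np

def moving_blocks_bootstrap(x, block_length, n_boot, seed=None):
    """Moving blocks bootstrap (Künsch 1989; Liu & Singh 1992) for the sample
    mean: resample overlapping blocks of consecutive observations and
    concatenate them to rebuild series of the original length."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks = -(-n // block_length)            # ceil(n / block_length)
    starts = np.arange(n - block_length + 1)    # all admissible block starts
    means = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks, replace=True)
        series = np.concatenate([x[s:s + block_length] for s in chosen])[:n]
        means[b] = series.mean()
    return means

# Example: bootstrap standard error of the mean for a dependent series.
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
x = np.convolve(e, [1.0, 0.6, 0.3])[:500]       # short-memory dependent data
boot = moving_blocks_bootstrap(x, block_length=10, n_boot=999, seed=1)
print("bootstrap std. error of the mean:", boot.std(ddof=1))
```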
Abstract:
This paper proposes an explanation for why efficient reforms are not carried out when losers have the power to block their implementation, even though compensating them is feasible. We construct a signaling model with two-sided incomplete information in which a government faces the task of sequentially implementing two reforms by bargaining with interest groups. The organization of interest groups is endogenous. Compensations are distortionary, and government types differ in their concern about distortions. We show that, when compensations are allowed to be informative about the government's type, there is a bias against the payment of compensations and the implementation of reforms. This is because paying high compensations today provides incentives for some interest groups to organize and oppose subsequent reforms with the sole purpose of receiving a transfer. By paying lower compensations, governments attempt to prevent such interest groups from organizing. However, this comes at the cost of reforms being blocked by interest groups with relatively high losses.
Abstract:
In this paper, we introduce a new approach for volatility modeling in discrete and continuous time. We follow the stochastic volatility literature by assuming that the variance is a function of a state variable. However, instead of assuming that the loading function is ad hoc (e.g., exponential or affine), we assume that it is a linear combination of the eigenfunctions of the conditional expectation (resp. infinitesimal generator) operator associated with the state variable in discrete (resp. continuous) time. Special examples are the popular log-normal and square-root models, where the eigenfunctions are the Hermite and Laguerre polynomials respectively. The eigenfunction approach has at least six advantages: i) it is general, since any square-integrable function may be written as a linear combination of the eigenfunctions; ii) the orthogonality of the eigenfunctions leads to the traditional interpretations of linear principal component analysis; iii) the implied dynamics of the variance and squared return processes are ARMA and, hence, simple for forecasting and inference purposes; iv) more importantly, this generates fat tails for the variance and return processes; v) in contrast to popular models, the variance of the variance is a flexible function of the variance; vi) these models are closed under temporal aggregation.
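In symbols, the construction the abstract describes can be sketched as follows for the discrete-time case (our notation):

```latex
% Let x_t be the state variable and \phi_i the eigenfunctions of its
% conditional expectation operator, with eigenvalues \lambda_i:
\[
  \mathbb{E}\!\left[ \phi_i(x_{t+1}) \mid x_t \right] = \lambda_i\, \phi_i(x_t),
  \qquad i = 1, 2, \dots
\]
% The variance is specified as a linear combination of eigenfunctions,
\[
  \sigma_t^{2} \;=\; \sum_{i} a_i\, \phi_i(x_t) ,
\]
% so each component \phi_i(x_t) has AR(1)-type autocorrelation \lambda_i and
% the variance inherits the simple ARMA dynamics mentioned in point iii).
% In the log-normal and square-root models, the \phi_i are the Hermite and
% Laguerre polynomials, respectively.
```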
Abstract:
Suzumura shows that a binary relation has a weak order extension if and only if it is consistent. However, consistency is demonstrably not sufficient to extend an upper semicontinuous binary relation to an upper semicontinuous weak order. Jaffray proves that any asymmetric (or reflexive), transitive and upper semicontinuous binary relation has an upper semicontinuous strict (or weak) order extension. We provide sufficient conditions for the existence of upper semicontinuous extensions of consistent, rather than transitive, relations. For asymmetric relations, consistency and upper semicontinuity suffice. For more general relations, we prove one theorem using a further consistency property and another with an additional continuity requirement.
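For readers unfamiliar with the terminology, Suzumura's consistency condition can be stated as follows (standard definition, with P(R) the asymmetric part of R):

```latex
% R is consistent if it admits no cycle containing a strict link: there exist
% no x_1, \dots, x_k \in X such that
\[
  x_1 \, R \, x_2,\quad x_2 \, R \, x_3,\quad \dots,\quad x_{k-1} \, R \, x_k,
  \qquad\text{and}\qquad x_k \, P(R) \, x_1 .
\]
% Consistency is weaker than transitivity, yet by Suzumura's theorem it is
% exactly what is needed for a weak order extension to exist.
```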
Abstract:
In spatial environments, we consider social welfare functions satisfying Arrow's requirements, i.e., weak Pareto and independence of irrelevant alternatives. When the policy space is a one-dimensional continuum, such a welfare function is determined by a collection of 2n strictly quasi-concave preferences and a tie-breaking rule. As a corollary, we obtain that when the number of voters is odd, simple majority voting is transitive if and only if each voter's preference is strictly quasi-concave. When the policy space is multi-dimensional, we establish Arrow's impossibility theorem. Among other results, we show that weak Pareto, independence of irrelevant alternatives, and non-dictatorship are inconsistent if the set of alternatives has a non-empty interior and is compact and convex.
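For reference, the strict quasi-concavity (single-peakedness) condition that drives the one-dimensional result can be written as follows (standard definition, not the article's notation):

```latex
% A preference represented by u on a one-dimensional policy space is strictly
% quasi-concave if, for all x \ne y and all \lambda \in (0,1),
\[
  u\bigl(\lambda x + (1-\lambda)\, y\bigr) \;>\; \min\{\, u(x),\, u(y) \,\} ,
\]
% i.e., each voter's preference has a single peak. With an odd number of such
% voters, the simple-majority relation is transitive, as in Black's median
% voter theorem.
```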
Abstract:
A desirable property of a voting procedure is that it be immune to the strategic withdrawal of a candidate for election. Dutta, Jackson, and Le Breton (Econometrica, 2001) have established a number of theorems that demonstrate that this condition is incompatible with some other desirable properties of voting procedures. This article shows that Grether and Plott's nonbinary generalization of Arrow's Theorem can be used to provide simple proofs of two of these impossibility theorems.