312 results for "Recherche à voisinage variable" (variable neighborhood search)
Abstract:
Several authors have recently discussed the limited dependent variable regression model with serial correlation between residuals. The pseudo-maximum likelihood estimators obtained by ignoring serial correlation altogether have been shown to be consistent. We present alternative pseudo-maximum likelihood estimators which are obtained by ignoring serial correlation only selectively. Monte Carlo experiments on a model with first-order serial correlation suggest that our alternative estimators have substantially lower mean-squared errors in medium-size and small samples, especially when the serial correlation coefficient is high. The same experiments also suggest that the true level of the confidence intervals established with our estimators by assuming asymptotic normality is somewhat lower than the intended level. Although the paper focuses on models with only first-order serial correlation, the generalization of the proposed approach to serial correlation of higher order is also discussed briefly.
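A minimal Monte Carlo sketch in this spirit (illustrative only, not the authors' alternative estimators): a probit model with stationary AR(1) latent errors of unit marginal variance, estimated by a pseudo-MLE that ignores the serial correlation altogether; all parameter values below are assumptions for the illustration.

```python
# Sketch: pseudo-MLE for a probit model with AR(1) latent errors, ignoring the dependence.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n, rho, beta_true, n_rep = 200, 0.8, 1.0, 500   # illustrative values

def simulate():
    x = rng.normal(size=n)
    e = np.empty(n)
    e[0] = rng.normal()
    for t in range(1, n):
        e[t] = rho * e[t - 1] + np.sqrt(1 - rho**2) * rng.normal()  # unit marginal variance
    return x, (beta_true * x + e > 0).astype(float)

def neg_pseudo_loglik(b, x, y):
    # product of marginal probit likelihoods: the AR(1) dependence is ignored
    p = np.clip(norm.cdf(b * x), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

est = np.array([minimize_scalar(neg_pseudo_loglik, args=simulate(),
                                bounds=(-5, 5), method="bounded").x
                for _ in range(n_rep)])
print("bias:", est.mean() - beta_true, " MSE:", ((est - beta_true) ** 2).mean())
```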
Abstract:
This paper studies tests of joint hypotheses in time series regression with a unit root in which weakly dependent and heterogeneously distributed innovations are allowed. We consider two types of regression: one with a constant and lagged dependent variable, and the other with a trend added. The statistics studied are the regression "F-tests" originally analysed by Dickey and Fuller (1981) in a less general framework. The limiting distributions are found using functional central limit theory. New test statistics are proposed which require only already tabulated critical values but which are valid in a quite general framework (including finite-order ARMA models generated by Gaussian errors). This study extends the results on single coefficients derived in Phillips (1986a) and Phillips and Perron (1986).
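For concreteness, the following sketch (simulated data; it implements only the classical Dickey-Fuller (1981) regression F statistic, not the corrected statistics proposed in the paper) shows the trend-augmented regression and the joint test of a unit root with no trend.

```python
# Sketch: Dickey-Fuller-type joint F statistic in the regression with constant and trend.
import numpy as np

def df_joint_F(y):
    dy = np.diff(y)
    t = np.arange(1, len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t, y[:-1]])   # constant, trend, lagged level
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    XtX_inv = np.linalg.inv(X.T @ X)
    # restrictions: trend coefficient = 0 and coefficient on y_{t-1} = 0 (unit root)
    R = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    r = R @ beta
    return r @ np.linalg.solve(R @ XtX_inv @ R.T * s2, r) / R.shape[0]

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=250))      # a pure random walk: the null is true
print("F statistic:", df_joint_F(y))     # compare with Dickey-Fuller Phi_3 critical values
```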
Abstract:
It has often been assumed that a country's tax level, tax structure progressivity and after-tax income distribution are chosen by voters subject only to their budget constraints. This paper argues that at certain income levels voters' decisions may be constrained by bureaucratic corruption. The theoretical arguments are developed in a setting in which asymmetry limits the capacity of the fiscal system to generate revenues by means of direct taxes. This hypothesis is tested with a sample of international data by means of a simultaneous equation model. The distortions resulting from corruption are captured through their effects on a latent variable defined as the overall fiscal structure. Evidence is found of causality running from this latent variable to the level of taxes and the degree of after-tax inequality.
Abstract:
Previous studies on the determinants of the choice of college major have assumed a constant probability of success across majors or a constant earnings stream across majors. Our model relaxes these two restrictive assumptions in computing an expected earnings variable to explain the probability that a student will choose a specific major among four choices of concentrations. The construction of an expected earnings variable requires information on the student's perceived probability of success, the predicted earnings of graduates in all majors, and the student's expected earnings if he (she) fails to complete a college program. First, using data from the National Longitudinal Survey of Youth, we evaluate the chances of success in all majors for all the individuals in the sample. Second, the individuals' predicted earnings as graduates in all majors are obtained using Rumberger and Thomas's (1993) regression estimates from a 1987 Survey of Recent College Graduates. Third, we obtain idiosyncratic estimates of earnings in the alternative of not attending college or of dropping out, using a condition derived from our college major decision-making model applied to our sample of college students. Finally, with a mixed multinomial logit model, we explain the individuals' choice of a major. The results of the paper show that the expected earnings variable is essential in the choice of a college major. There are, however, significant differences in the impact of expected earnings by gender and race.
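A stylized numerical sketch of how such an expected earnings regressor can be assembled (all numbers hypothetical, not the NLSY data; the final step only mimics the flavour of a multinomial logit rather than the paper's mixed specification):

```python
# Sketch: expected earnings = P(success) * graduate earnings + (1 - P(success)) * fallback earnings.
import numpy as np

p_success = np.array([[0.9, 0.6, 0.7, 0.5],    # student 1's chances of completing each of 4 majors
                      [0.5, 0.8, 0.6, 0.7]])   # student 2's chances
earn_grad = np.array([55_000, 48_000, 62_000, 45_000])  # predicted graduate earnings by major
earn_fail = np.array([[28_000], [30_000]])               # earnings if the student does not finish

expected_earnings = p_success * earn_grad + (1 - p_success) * earn_fail
print(expected_earnings)

# Implied logit-style choice probabilities with a single (illustrative) coefficient:
gamma = 1e-4
u = gamma * expected_earnings
prob = np.exp(u - u.max(axis=1, keepdims=True))
prob /= prob.sum(axis=1, keepdims=True)
print(prob)
```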
Abstract:
This paper addresses the issue of estimating semiparametric time series models specified by their conditional mean and conditional variance. We stress the importance of using joint restrictions on the mean and variance. This leads us to take into account the covariance between the mean and the variance and the variance of the variance, that is, the skewness and kurtosis. We establish the direct links between the usual parametric estimation methods, namely the QMLE, the GMM and M-estimation. The usual univariate QMLE is, under non-normality, less efficient than the optimal GMM estimator. However, the bivariate QMLE based on the dependent variable and its square is as efficient as the optimal GMM one. A Monte Carlo analysis confirms the relevance of our approach, in particular the importance of skewness.
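As a small illustration of the setting (not the paper's estimators or comparisons), the following sketch fits a simple ARCH(1) model, a conditional mean/variance specification with constant mean, by Gaussian QMLE even though the innovations are non-normal; all values are assumptions.

```python
# Sketch: Gaussian QMLE of an ARCH(1) model with non-normal (Student-t) innovations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, omega, alpha = 2_000, 0.2, 0.5
eps = rng.standard_t(df=6, size=n) / np.sqrt(6 / 4)   # non-normal, unit-variance innovations
y = np.empty(n)
h = np.empty(n)
h[0] = omega / (1 - alpha)                            # unconditional variance
y[0] = np.sqrt(h[0]) * eps[0]
for t in range(1, n):
    h[t] = omega + alpha * y[t - 1] ** 2
    y[t] = np.sqrt(h[t]) * eps[t]

def neg_qloglik(theta):
    w, a = theta
    if w <= 0 or a < 0 or a >= 1:
        return np.inf
    hh = np.empty(n)
    hh[0] = w / (1 - a)
    hh[1:] = w + a * y[:-1] ** 2
    return 0.5 * np.sum(np.log(hh) + y**2 / hh)       # Gaussian quasi-likelihood

res = minimize(neg_qloglik, x0=[0.1, 0.3], method="Nelder-Mead")
print("QMLE estimates (omega, alpha):", res.x)
```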
Abstract:
Recent work suggests that the conditional variance of financial returns may exhibit sudden jumps. This paper extends to higher conditional moments, in particular the conditional variance, a non-parametric procedure developed by Delgado and Hidalgo (1996) for detecting discontinuities in otherwise continuous functions of a random variable. Simulation results show that the procedure provides reasonable estimates of the number and location of jumps. The procedure detects several jumps in the conditional variance of daily returns on the S&P 500 index.
Abstract:
In this paper, we consider testing marginal normal distributional assumptions. More precisely, we propose tests based on moment conditions implied by normality. These moment conditions are known as the Stein (1972) equations. They coincide with the first class of moment conditions derived by Hansen and Scheinkman (1995) when the random variable of interest is a scalar diffusion. Among other examples, the Stein equation implies that the mean of Hermite polynomials is zero. The GMM approach we adopt is well suited for two reasons. It allows us to study in detail the parameter uncertainty problem, i.e., when the tests depend on unknown parameters that have to be estimated. In particular, we characterize the moment conditions that are robust against parameter uncertainty and show that Hermite polynomials are special examples. This is the main contribution of the paper. The second reason for using GMM is that our tests are also valid for time series. In this case, we adopt a heteroskedasticity and autocorrelation consistent (HAC) approach to estimate the weighting matrix when the dependence of the data is unspecified. We also make a theoretical comparison of our tests with the Jarque and Bera (1980) and OPG regression tests of Davidson and MacKinnon (1993). Finite sample properties of our tests are derived through a comprehensive Monte Carlo study. Finally, three applications to GARCH and realized volatility models are presented.
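A simplified i.i.d. sketch of the underlying idea (it omits the HAC weighting and the full treatment of parameter uncertainty discussed above): under normality the probabilists' Hermite polynomials have zero mean, so their sample means can be stacked into a GMM-type chi-square statistic.

```python
# Sketch: GMM-type normality test from Hermite-polynomial moment conditions.
import numpy as np
from scipy.stats import chi2

def hermite_normality_test(x):
    # standardize with sample mean/variance; per the abstract, Hermite-based conditions
    # are robust to this estimation step
    z = (x - x.mean()) / x.std(ddof=1)
    g = np.column_stack([z**3 - 3 * z,          # He_3: captures skewness
                         z**4 - 6 * z**2 + 3])  # He_4: captures excess kurtosis
    n = len(z)
    gbar = g.mean(axis=0)
    S = np.cov(g, rowvar=False)                 # i.i.d. weighting matrix (no HAC correction here)
    stat = n * gbar @ np.linalg.solve(S, gbar)
    return stat, chi2.sf(stat, df=g.shape[1])

rng = np.random.default_rng(2)
print(hermite_normality_test(rng.normal(size=1_000)))              # should not reject
print(hermite_normality_test(rng.standard_t(df=5, size=1_000)))    # heavy tails: should typically reject
```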
Abstract:
Public policies often involve choices of alternatives in which the size and the composition of the population may vary. Examples are the allocation of resources to prenatal care and the design of aid packages to developing countries. In order to assess the corresponding feasible choices on normative grounds, criteria for social evaluation that are capable of performing variable-population comparisons are required. We review several important axioms for welfarist population principles and discuss the link between individual well-being and the desirability of adding a new person to a given society.
Abstract:
It is well known that standard asymptotic theory is not valid or is extremely unreliable in models with identification problems or weak instruments [Dufour (1997, Econometrica), Staiger and Stock (1997, Econometrica), Wang and Zivot (1998, Econometrica), Stock and Wright (2000, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. One possible way out consists in using a variant of the Anderson-Rubin (1949, Ann. Math. Stat.) procedure. The latter, however, allows one to build exact tests and confidence sets only for the full vector of the coefficients of the endogenous explanatory variables in a structural equation, which in general does not allow for inference on individual coefficients. This problem may in principle be overcome by using projection techniques [Dufour (1997, Econometrica), Dufour and Jasiak (2001, International Economic Review)]. AR-type procedures are emphasized because they are robust to both weak instruments and instrument exclusion. However, these techniques can be implemented only by using costly numerical methods. In this paper, we provide a complete analytic solution to the problem of building projection-based confidence sets from Anderson-Rubin-type confidence sets. The solution involves the geometric properties of “quadrics” and can be viewed as an extension of usual confidence intervals and ellipsoids. Only least squares techniques are required for building the confidence intervals. We also study by simulation how “conservative” projection-based confidence sets are. Finally, we illustrate the methods proposed by applying them to three different examples: the relationship between trade and growth in a cross-section of countries, returns to education, and a study of production functions in the U.S. economy.
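As a point of reference, here is a minimal numerical sketch (hypothetical simulated data, not the paper's applications) of the Anderson-Rubin statistic and a brute-force grid tracing of its confidence set; with several endogenous coefficients, the projection step, whose closed-form quadric-based solution is the paper's contribution, would read off the range of each coefficient over the non-rejected points.

```python
# Sketch: Anderson-Rubin statistic and a grid approximation of the AR confidence set.
import numpy as np
from scipy.stats import f as f_dist

def ar_stat(y, Y, Z, beta0):
    # AR statistic for H0: beta = beta0 in y = Y beta + u, with instruments Z
    n, k = Z.shape
    u = y - Y @ beta0
    Pu = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]      # projection of u onto the instrument space
    return ((Pu @ Pu) / k) / ((u - Pu) @ (u - Pu) / (n - k))

# simulated example with one endogenous regressor and two instruments (all hypothetical)
rng = np.random.default_rng(3)
n, beta_true = 300, 0.5
Z = rng.normal(size=(n, 2))
v = rng.normal(size=n)
Y = Z @ np.array([1.0, 0.5]) + v                       # first stage
y = beta_true * Y + 0.8 * v + rng.normal(size=n)       # endogeneity enters through v

crit = f_dist.ppf(0.95, dfn=Z.shape[1], dfd=n - Z.shape[1])
grid = np.linspace(-1.0, 2.0, 601)
accepted = [b for b in grid
            if ar_stat(y, Y.reshape(-1, 1), Z, np.array([b])) <= crit]
if accepted:
    print("95% AR confidence set (grid approximation): [%.3f, %.3f]" % (min(accepted), max(accepted)))
else:
    print("empty AR confidence set at the 95% level")
```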
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions (a condition not satisfied by standard Wald-type methods based on standard errors), and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, and Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria for assessing alternative procedures.
Abstract:
In the last decade, the potential macroeconomic effects of intermittent large adjustments in microeconomic decision variables such as prices, investment, consumption of durables or employment – a behavior which may be justified by the presence of kinked adjustment costs – have been studied in models where economic agents continuously observe the optimal level of their decision variable. In this paper, we develop a simple model which introduces infrequent information into a kinked adjustment cost model by assuming that agents do not observe continuously the frictionless optimal level of the control variable. Periodic releases of macroeconomic statistics or dividend announcements are examples of such infrequent information arrivals. We first solve for the optimal individual decision rule, which is found to be both state and time dependent. We then develop an aggregation framework to study the macroeconomic implications of such optimal individual decision rules. Our model has the distinct characteristic that a vast number of agents tend to act together, and more so when uncertainty is large. The average effect of an aggregate shock is inversely related to its size and to aggregate uncertainty. We show that these results differ substantially from the ones obtained with full information adjustment cost models.
Abstract:
McCausland (2004a) describes a new theory of random consumer demand. Theoretically consistent random demand can be represented by a "regular" "L-utility" function on the consumption set X. The present paper is about Bayesian inference for regular L-utility functions. We express prior and posterior uncertainty in terms of distributions over the infinite-dimensional parameter set of a flexible functional form. We propose a class of proper priors on the parameter set. The priors are flexible, in the sense that they put positive probability in the neighborhood of any L-utility function that is regular on a large subset bar(X) of X; and regular, in the sense that they assign zero probability to the set of L-utility functions that are irregular on bar(X). We propose methods of Bayesian inference for an environment with indivisible goods, leaving the more difficult case of infinitely divisible goods for another paper. We analyse individual choice data from a consumer experiment described in Harbaugh et al. (2001).
Abstract:
In the past quarter century, there has been a dramatic shift of focus in social choice theory, with structured sets of alternatives and restricted domains of the sort encountered in economic problems coming to the fore. This article provides an overview of some of the recent contributions to four topics in normative social choice theory in which economic modelling has played a prominent role: Arrovian social choice theory on economic domains, variable-population social choice, strategy-proof social choice, and axiomatic models of resource allocation.
Abstract:
This paper develops a model where the value of the monetary policy instrument is selected by a heterogeneous committee engaged in a dynamic voting game. Committee members differ in their institutional power and, in certain states of nature, they also differ in their preferred instrument value. Preference heterogeneity and concern for the future interact to generate decisions that are dynamically inefficient and inertial around the previously agreed instrument value. This model endogenously generates autocorrelation in the policy variable and provides an explanation for the empirical observation that the nominal interest rate under the central bank's control is infrequently adjusted.
Abstract:
Consistency, a natural weakening of transitivity introduced in a seminal contribution by Suzumura (1976b), has turned out to be an interesting and promising concept in a variety of areas within economic theory. This paper summarizes its recent applications and provides some new observations in welfarist social choice and in population ethics. In particular, it is shown that the conclusion of the welfarism theorem remains true if transitivity is replaced by consistency and that an impossibility result in variable-population social-choice theory turns into a possibility if transitivity is weakened to consistency.