196 results for [JEL:C30] Mathematical and Quantitative Methods - Econometric Methods: Multiple
Abstract:
We compare two methods for visualising contingency tables and develop a method called the ratio map which combines the good properties of both. The first is a biplot based on the logratio approach to compositional data analysis. This approach is founded on the principle of subcompositional coherence, which assures that results are invariant to considering subsets of the composition. The second approach, correspondence analysis, is based on the chi-square approach to contingency table analysis. A cornerstone of correspondence analysis is the principle of distributional equivalence, which assures invariance in the results when rows or columns with identical conditional proportions are merged. Both methods may be described as singular value decompositions of appropriately transformed matrices. Correspondence analysis includes a weighting of the rows and columns proportional to the margins of the table. If this idea of row and column weights is introduced into the logratio biplot, we obtain a method which obeys both principles of subcompositional coherence and distributional equivalence.
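The construction described above can be illustrated with a short numerical sketch: log-transform the table, double-centre it with the row and column masses as weights, and take the singular value decomposition. The table and variable names below are hypothetical; this is a minimal sketch of the weighted log-ratio idea, not the paper's implementation.

```python
import numpy as np

# Hypothetical contingency table (not data from the paper).
N = np.array([[20.0,  5.0, 10.0],
              [ 8.0, 30.0, 12.0],
              [ 4.0,  6.0, 25.0]])

P = N / N.sum()                       # correspondence matrix
r = P.sum(axis=1)                     # row masses (weights)
c = P.sum(axis=0)                     # column masses (weights)

L = np.log(P)                         # log transform (cells must be strictly positive)

# Weighted double-centring: remove weighted row and column means.
row_means = L @ c
col_means = r @ L
grand     = r @ L @ c
Lc = L - row_means[:, None] - col_means[None, :] + grand

# SVD of the mass-weighted, centred matrix.
S = np.sqrt(r)[:, None] * Lc * np.sqrt(c)[None, :]
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates for a two-dimensional display of rows and columns.
F = (U / np.sqrt(r)[:, None]) * sv
G = (Vt.T / np.sqrt(c)[:, None]) * sv
print(np.round(F[:, :2], 3))
print(np.round(G[:, :2], 3))
```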
Abstract:
Theorem 1 of Euler's paper of 1737, 'Variae Observationes Circa Series Infinitas', states the astonishing result that the series of all unit fractions whose denominators are perfect powers of integers minus unity has sum one. Euler attributes the Theorem to Goldbach. The proof is one of those examples of misuse of divergent series to obtain correct results so frequent during the seventeenth and eighteenth centuries. We examine this proof closely and, with the help of some insight provided by a modern (and completely different) proof of the Goldbach-Euler Theorem, we present a rational reconstruction in terms which could be considered rigorous by modern Weierstrassian standards. At the same time, with a few ideas borrowed from nonstandard analysis, we see how the same reconstruction can also be considered rigorous by modern Robinsonian standards. This last approach, though, is completely in tune with Goldbach and Euler's proof. We hope to convince the reader of how a few simple ideas from nonstandard analysis vindicate Euler's work.
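As a purely numerical illustration of the statement (not of Euler's argument), one can enumerate the perfect powers up to a bound and sum the corresponding unit fractions; the partial sums approach 1 slowly. The bound below is an arbitrary choice.

```python
# Goldbach-Euler statement: the sum of 1/(m - 1) over all perfect powers m >= 4
# (each power counted once) equals 1. We check the partial sum up to a bound.
LIMIT = 10**7

perfect_powers = set()
base = 2
while base * base <= LIMIT:
    m = base * base
    while m <= LIMIT:
        perfect_powers.add(m)   # the set removes duplicates such as 64 = 2**6 = 4**3
        m *= base
    base += 1

partial_sum = sum(1.0 / (m - 1) for m in sorted(perfect_powers))
print(f"partial sum up to {LIMIT}: {partial_sum:.6f}")   # slowly approaches 1
```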
Abstract:
This work presents an application of multilevel analysis techniques to the study of abstention in the 2000 Spanish general election. The interest of the study is both substantive and methodological. From the substantive point of view, the article intends to explain the causes of abstention and analyze the impact of associationism on it. From the methodological point of view, it is intended to analyze the interaction between individual and context with a model that takes into account the hierarchical structure of the data. The multilevel study of this paper validates the one-level results obtained in previous analyses of abstention and shows that only a fraction of the differences in abstention are explained by the individual characteristics of the electors. Another important fraction of these differences is due to the political and social characteristics of the context. Regarding associationism, the data suggest that individual participation in associations decreases the probability of abstention. However, better indicators are needed in order to capture more properly the effect of associationism on electoral behaviour.
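A minimal sketch of the kind of two-level specification described here, with individuals nested in territorial units: a mixed model with a district-level random intercept. The data file and column names are hypothetical, and a linear mixed model is used only as a stand-in for the multilevel model of abstention estimated in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey file: one row per respondent, with a district identifier.
df = pd.read_csv("abstention_survey.csv")

# Individual-level covariates plus a random intercept for the district (level 2).
model = smf.mixedlm(
    "abstained ~ age + education + member_of_association",
    data=df,
    groups=df["district"],
)
result = model.fit()
print(result.summary())

# The estimated variance of the district intercept, relative to the residual
# variance, indicates how much of the variation in abstention is contextual.
```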
Abstract:
Confidence intervals in econometric time series regressions suffer from notorious coverage problems. This is especially true when the dependence in the data is noticeable and sample sizes are small to moderate, as is often the case in empirical studies. This paper suggests using the studentized block bootstrap and discusses practical issues, such as the choice of the block size. A particular data-dependent method is proposed to automate the method. As a side note, it is pointed out that symmetric confidence intervals are preferred over equal-tailed ones, since they exhibit improved coverage accuracy. The improvements in small sample performance are supported by a simulation study.
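A rough sketch of a studentized circular block bootstrap producing a symmetric confidence interval for the mean of a dependent series. The fixed block length and the simple batch-means standard error used below are simplifying assumptions, not the data-dependent choices discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_means_se(x, b):
    """Crude standard error of the mean from non-overlapping block (batch) means."""
    n = len(x)
    k = n // b
    means = x[:k * b].reshape(k, b).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(k)

def studentized_block_bootstrap_ci(x, b, B=999, alpha=0.05):
    """Symmetric studentized CI for the mean using a circular block bootstrap."""
    n = len(x)
    theta_hat = x.mean()
    se_hat = batch_means_se(x, b)
    x_ext = np.concatenate([x, x[:b]])            # wrap around for circular blocks
    t_abs = np.empty(B)
    for j in range(B):
        starts = rng.integers(0, n, size=int(np.ceil(n / b)))
        xb = np.concatenate([x_ext[s:s + b] for s in starts])[:n]
        t_abs[j] = abs(xb.mean() - theta_hat) / batch_means_se(xb, b)
    q = np.quantile(t_abs, 1 - alpha)             # symmetric critical value
    return theta_hat - q * se_hat, theta_hat + q * se_hat

# Illustration on a simulated AR(1) series.
n = 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()
print(studentized_block_bootstrap_ci(y, b=10))
```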
Abstract:
Unemployment rates in developed countries have recently reached levels not seen in a generation, and workers of all ages are facing increasing probabilities of losing their jobs and considerable losses in accumulated assets. These events likely increase the reliance that most older workers will have on public social insurance programs, exactly at a time that public finances are suffering from a large drop in contributions. Our paper explicitly accounts for employment uncertainty and unexpected wealth shocks, something that has been relatively overlooked in the literature, but that has grown in importance in recent years. Using administrative and household-level data, we empirically characterize a life-cycle model of retirement and claiming decisions in terms of the employment, wage, health, and mortality uncertainty faced by individuals. Our benchmark model explains with great accuracy the strikingly high proportion of individuals who claim benefits exactly at the Early Retirement Age, while still explaining the increased claiming hazard at the Normal Retirement Age. We also discuss some policy experiments and their interplay with employment uncertainty. Additionally, we analyze the effects of negative wealth shocks on the labor supply and claiming decisions of older Americans. Our results can explain why early claiming has remained very high in the last years even as the early retirement penalties have increased substantially compared with previous periods, and why labor force participation has remained quite high for older workers even in the midst of the worst employment crisis in decades.
Abstract:
We examine the relationship between institutions, culture and cyclical fluctuations for a sample of 45 European, Middle Eastern and North African countries. Better governance is associated with shorter and less severe contractions and milder expansions. Certain cultural traits, such as lack of acceptance of power distance and individualism, are also linked to business cycle features. Business cycle synchronization is tightly related to similarities in the institutional environment. Mediterranean countries conform to these general tendencies.
Abstract:
This paper studies the determinants of school choice, focusing on the role of information. We consider how parents' search efforts and their capacity to process information (i.e., to correctly assess schools) affect the quality of the schools they choose for their children. Using a novel dataset, we are able to identify parents' awareness of schools in their neighborhood and measure their capacity to rank the quality of the school with respect to the official rankings. We find that parents' education and wealth are important factors in determining their level of school awareness and information gathering. Moreover, these search efforts have important consequences in terms of the quality of school choice.
Abstract:
Structural equation models are widely used in economic, social and behavioral studies to analyze linear interrelationships among variables, some of which may be unobservable or subject to measurement error. Alternative estimation methods that exploit different distributional assumptions are now available. The present paper deals with issues of asymptotic statistical inference, such as the evaluation of standard errors of estimates and chi-square goodness-of-fit statistics, in the general context of mean and covariance structures. The emphasis is on drawing correct statistical inferences regardless of the distribution of the data and the method of estimation employed. A (distribution-free) consistent estimate of $\Gamma$, the matrix of asymptotic variances of the vector of sample second-order moments, will be used to compute robust standard errors and a robust chi-square goodness-of-fit statistic. Simple modifications of the usual estimate of $\Gamma$ will also permit correct inferences in the case of multi-stage complex samples. We will also discuss the conditions under which, regardless of the distribution of the data, one can rely on the usual (non-robust) inferential statistics. Finally, a multivariate regression model with errors-in-variables will be used to illustrate, by means of simulated data, various theoretical aspects of the paper.
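A short sketch of the kind of distribution-free estimate of $\Gamma$ referred to above: the sample covariance matrix of the half-vectorized cross-products of the centred observations. The simulated data and function names are illustrative assumptions, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def vech(A):
    """Half-vectorization: stack the lower-triangular elements of a symmetric matrix."""
    return A[np.tril_indices(A.shape[0])]

def gamma_adf(Y):
    """
    Distribution-free estimate of Gamma, the asymptotic covariance matrix of
    sqrt(n) * vech(S): sample covariance of d_i = vech((y_i - ybar)(y_i - ybar)').
    """
    Yc = Y - Y.mean(axis=0)
    D = np.array([vech(np.outer(row, row)) for row in Yc])
    return np.cov(D, rowvar=False, ddof=1)

# Simulated skewed data (p = 3 variables, n = 500), where normal-theory SEs can fail.
n, p = 500, 3
Y = rng.chisquare(df=3, size=(n, p))
Gamma_hat = gamma_adf(Y)
print(Gamma_hat.shape)    # (p*(p+1)/2, p*(p+1)/2) -> (6, 6)
```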
Abstract:
We study the effect of the business cycle on the health of newborn babies using 30 years of birth certificate data for Spain. Exploiting regional variation over time, we find that babies are born healthier when the local unemployment rate is high. Although fertility is lower during recessions, the effect on health is not the result of selection (healthier mothers being more likely to conceive when unemployment is high). We match multiple births to the same parents and find that the main result survives the inclusion of parent fixed effects. We then explore a range of maternal behaviors as potential channels. Fertility-age women do not appear to engage in significantly healthier behaviors during recessions (in terms of exercise, nutrition, smoking and drinking). However, they are more likely to be out of work. Maternal employment during pregnancy is in turn negatively correlated with babies' health. We conclude that maternal employment is a plausible mediating channel.
Abstract:
This paper extends multivariate Granger causality to take into account the subspaces along which Granger causality occurs, as well as long run Granger causality. The properties of these new notions of Granger causality, along with the requisite restrictions, are derived and extensively studied for a wide variety of time series processes, including linear invertible processes and VARMA. Using the proposed extensions, the paper demonstrates that: (i) mean reversion in L2 is an instance of long run Granger non-causality, (ii) cointegration is a special case of long run Granger non-causality along a subspace, (iii) controllability is a special case of Granger causality, and finally (iv) linear rational expectations entail (possibly testable) Granger causality restrictions along subspaces.
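For orientation, a minimal sketch of the standard short-run Granger causality test that these notions generalize, run on simulated data with statsmodels; it does not implement the subspace or long-run extensions proposed in the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)

# Simulate a bivariate system in which x Granger-causes y (illustration only).
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.3 * y[t - 1] + 0.6 * x[t - 1] + rng.standard_normal()

# Tests whether the second column (x) Granger-causes the first column (y);
# F and chi-square statistics are reported for each lag order up to maxlag.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
```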
Abstract:
Graphical displays which show inter-sample distances are important for the interpretation and presentation of multivariate data. Except when the displays are two-dimensional, however, they are often difficult to visualize as a whole. A device, based on multidimensional unfolding, is described for presenting some intrinsically high-dimensional displays in fewer, usually two, dimensions. This goal is achieved by representing each sample by a pair of points, say $R_i$ and $r_i$, so that a theoretical distance between the $i$-th and $j$-th samples is represented twice, once by the distance between $R_i$ and $r_j$ and once by the distance between $R_j$ and $r_i$. Self-distances between $R_i$ and $r_i$ need not be zero. The mathematical conditions for unfolding to exhibit symmetry are established. Algorithms for finding approximate fits, not constrained to be symmetric, are discussed and some examples are given.
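A rough sketch of the unfolding device described above: each sample is represented by a pair of points, and the configuration is fitted by least squares so that the distance from $R_i$ to $r_j$ approximates the theoretical distance between samples $i$ and $j$, with the diagonal self-distances left unconstrained. The optimizer and the toy distance matrix are arbitrary choices, not those of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)

# Toy "theoretical" distances: Euclidean distances among random 5-dimensional points.
X = rng.standard_normal((8, 5))
Delta = squareform(pdist(X))

n, k = Delta.shape[0], 2                  # each sample gets two points R_i, r_i in 2-D
off_diag = ~np.eye(n, dtype=bool)         # self-distances d(R_i, r_i) are unconstrained

def stress(params):
    R = params[:n * k].reshape(n, k)
    r = params[n * k:].reshape(n, k)
    D = np.linalg.norm(R[:, None, :] - r[None, :, :], axis=2)   # D[i, j] = d(R_i, r_j)
    return np.sum((D[off_diag] - Delta[off_diag]) ** 2)

x0 = 0.1 * rng.standard_normal(2 * n * k)
fit = minimize(stress, x0, method="L-BFGS-B")
print("residual stress:", round(fit.fun, 4))
```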
Abstract:
We extend to score, Wald and difference test statistics the scaled and adjusted corrections to goodness-of-fit test statistics developed in Satorra and Bentler (1988a,b). The theory is framed in the general context of multisample analysis of moment structures, under general conditions on the distribution of observable variables. Computational issues, as well as the relation of the scaled and corrected statistics to the asymptotic robust ones, are discussed. A Monte Carlo study illustrates the comparative performance in finite samples of corrected score test statistics.
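As a reminder of the generic form of such corrections (a standard presentation of the scaled and adjusted statistics, not taken from the paper itself), with $T$ a goodness-of-fit statistic, $r$ its nominal degrees of freedom, $\hat{U}$ the relevant residual weight matrix and $\hat{\Gamma}$ the asymptotic covariance matrix of the sample moments:

$$\bar{T} = \frac{T}{\hat{c}}, \qquad \hat{c} = \frac{\mathrm{tr}(\hat{U}\hat{\Gamma})}{r} \quad \text{(scaled statistic, referred to } \chi^2_{r}\text{)},$$

$$\bar{\bar{T}} = \frac{\bar{r}}{\mathrm{tr}(\hat{U}\hat{\Gamma})}\,T, \qquad \bar{r} = \frac{[\mathrm{tr}(\hat{U}\hat{\Gamma})]^{2}}{\mathrm{tr}[(\hat{U}\hat{\Gamma})^{2}]} \quad \text{(adjusted statistic, referred to } \chi^2_{\bar{r}}\text{)}.$$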
Abstract:
In this paper we propose a simple and general model for computing the Ramsey optimal inflation tax, which includes several models from the previous literature as special cases. We show that it cannot be claimed that the Friedman rule is always optimal (or always non-optimal) on theoretical grounds. The Friedman rule is optimal or not, depending on conditions related to the shape of various relevant functions. One contribution of this paper is to relate these conditions to {\it measurable} variables such as the interest rate or the consumption elasticity of money demand. We find that it tends to be optimal to tax money when there are economies of scale in the demand for money (the scale elasticity is smaller than one) and/or when money is required for the payment of consumption or wage taxes. We find that it tends to be optimal to tax money more heavily when the interest elasticity of money demand is small. We present empirical evidence on the parameters that determine the optimal inflation tax. Calibrating the model to a variety of empirical studies yields an optimal nominal interest rate of less than 1\%/year, although that finding is sensitive to the calibration.
Abstract:
The present paper proposes a model for the persistence of abnormal returns both at firm and industry levels, when longitudinal data for the profits of firms classified into industries are available. The model produces a two-way variance decomposition of abnormal returns: (a) at firm versus industry levels, and (b) for permanent versus transitory components. This variance decomposition supplies information on the relative importance of the fundamental components of abnormal returns that have been discussed in the literature. The model is applied to a Spanish sample of firms, obtaining results such as: (a) there are significant and permanent differences between profit rates both at industry and firm levels; (b) variation of abnormal returns at firm level is greater than at industry level; and (c) firm and industry levels do not differ significantly regarding rates of convergence of abnormal returns.
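The two-way decomposition can be sketched with a small simulation: generate abnormal returns from permanent and transitory components at the firm and industry levels, and recover the four variances from simple within-industry covariances. All parameter values below are hypothetical, and the estimator is a plain method-of-moments illustration, not the model estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Balanced panel with the four components described above (hypothetical std. devs.).
J, I, T = 20, 15, 10                       # industries, firms per industry, years
sd = {"ind_perm": 0.5, "firm_perm": 1.0, "ind_trans": 0.3, "firm_trans": 0.8}

ind_perm   = rng.normal(0, sd["ind_perm"], J)
firm_perm  = rng.normal(0, sd["firm_perm"], (J, I))
ind_trans  = rng.normal(0, sd["ind_trans"], (J, T))
firm_trans = rng.normal(0, sd["firm_trans"], (J, I, T))

y = (ind_perm[:, None, None] + firm_perm[:, :, None]
     + ind_trans[:, None, :] + firm_trans)          # abnormal returns, shape (J, I, T)

# Method-of-moments recovery from within-industry covariances:
#   different firm, different year -> var(ind_perm)
#   same firm,      different year -> var(ind_perm) + var(firm_perm)
#   different firm, same year      -> var(ind_perm) + var(ind_trans)
#   total variance                 -> sum of all four components
yc = y - y.mean()
firm_idx = np.repeat(np.arange(I), T)
year_idx = np.tile(np.arange(T), I)
same_firm = firm_idx[:, None] == firm_idx[None, :]
same_year = year_idx[:, None] == year_idx[None, :]

def within_industry_cov(mask):
    """Average cross-product over the cell pairs selected by `mask`, per industry."""
    return np.mean([np.outer(yc[j].ravel(), yc[j].ravel())[mask].mean() for j in range(J)])

c_perm_ind = within_industry_cov(~same_firm & ~same_year)
c_perm_tot = within_industry_cov(same_firm & ~same_year)
c_same_yr  = within_industry_cov(~same_firm & same_year)
total_var  = yc.var()

print("industry, permanent :", round(c_perm_ind, 3))
print("firm, permanent     :", round(c_perm_tot - c_perm_ind, 3))
print("industry, transitory:", round(c_same_yr - c_perm_ind, 3))
print("firm, transitory    :", round(total_var - c_perm_tot - c_same_yr + c_perm_ind, 3))
```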
Abstract:
It is widely accepted in the literature about the classical Cournot oligopoly model that the loss of quasi-competitiveness is linked, in the long run as new firms enter the market, to instability of the equilibrium. In this paper, though, we present a model in which a stable unique symmetric equilibrium is reached for any number of oligopolists as industry price increases with each new entry. Consequently, the suspicion that non-quasi-competitiveness implies, in the long run, instability is proved false.