202 results for [JEL:C22] Mathematical and Quantitative Methods - Econometric Methods: Single Equation Models


Relevance:

100.00%

Abstract:

In this paper we attempt to describe the general reasons behind the world population explosion in the 20th century. The size of the population at the end of the century in question, deemed excessive by some, was a consequence of a dramatic improvement in life expectancies, attributable, in turn, to scientific innovation, the circulation of information and economic growth. Nevertheless, fertility is a variable that plays a crucial role in differences in demographic growth. We identify infant mortality, female education levels and racial identity as important exogenous variables affecting fertility. It is estimated that in poor countries one additional year of primary schooling for women leads to 0.614 fewer children per couple on average (worldwide). While it may be possible to identify a global tendency towards convergence in demographic trends, particular attention should be paid to the case of Africa, not only due to its different demographic patterns, but also because much of the continent's population has yet to experience the improvement in quality of life generally enjoyed across the rest of the planet.
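As a rough illustration of the kind of cross-country fertility regression the abstract summarizes, a minimal sketch follows; the file name and the column names (fertility, infant_mortality, female_schooling, region) are hypothetical placeholders, not the paper's data or specification.

```python
# Hedged sketch: a cross-country fertility regression of the kind the
# abstract describes. The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fertility_panel.csv")  # hypothetical dataset

# Fertility (children per couple) on infant mortality and female primary
# schooling (years), with region controls; the schooling coefficient would
# correspond to the paper's estimate of -0.614 children per extra year.
model = smf.ols("fertility ~ infant_mortality + female_schooling + C(region)",
                data=df).fit(cov_type="HC1")
print(model.summary())
```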

Relevance:

100.00%

Abstract:

We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
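A minimal sketch of the weighted log-ratio recipe described above, assuming the generic construction (weighted double-centring of the log matrix followed by a weighted SVD); the function name and implementation details are ours, not the authors' code.

```python
# Hedged sketch of weighted log-ratio analysis: SVD of the weighted,
# double-centred log-transformed matrix, using row/column margins as weights.
import numpy as np

def weighted_log_ratio(N):
    """Weighted log-ratio analysis of a strictly positive matrix N."""
    P = N / N.sum()                      # relative values
    r, c = P.sum(axis=1), P.sum(axis=0)  # row and column weights (masses)
    L = np.log(P)
    # Weighted double-centring: removes weighted row and column means, so
    # only log-ratios matter (subcompositional coherence).
    L = L - np.outer(np.ones(len(r)), r @ L) \
          - np.outer(L @ c, np.ones(len(c))) + (r @ L @ c)
    S = np.sqrt(r)[:, None] * L * np.sqrt(c)[None, :]
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = U * sv / np.sqrt(r)[:, None]  # row principal coordinates
    cols = Vt.T / np.sqrt(c)[:, None]    # column standard coordinates
    return rows, cols, sv**2             # coordinates and principal inertias

rows, cols, inertias = weighted_log_ratio(np.random.default_rng(0)
                                          .uniform(1, 10, size=(6, 4)))
```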

Relevance:

100.00%

Abstract:

A method is offered that makes it possible to apply generalized canonical correlation analysis (CANCOR) to two or more matrices of different row and column order. The new method optimizes the generalized canonical correlation analysis objective by considering only the observed values. This is achieved by employing selection matrices. We present and discuss fit measures to assess the quality of the solutions. In a simulation study we assess the performance of our new method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll. We find that our new method outperforms the GENCOM algorithm both with respect to model fit and recovery of the true structure. Moreover, as our new method does not require any type of iteration, it is easier to implement and requires less computation. We illustrate the method by means of an example concerning the relative positions of the political parties in the Netherlands based on provincial data.
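A small illustration of the selection-matrix device: a 0/1 matrix that keeps only the rows actually observed in a data block, so a least-squares objective of the GCCA type counts observed values only. The names and the toy objective below are ours, not the authors' implementation.

```python
# Hedged illustration of selection matrices restricting an objective to
# observed rows; this is the device, not the paper's full algorithm.
import numpy as np

def selection_matrix(observed_rows, n_total):
    """0/1 matrix S such that S @ G keeps only the observed rows of G."""
    S = np.zeros((len(observed_rows), n_total))
    S[np.arange(len(observed_rows)), observed_rows] = 1.0
    return S

def gcca_loss(G, blocks):
    """Toy GCCA-type objective: sum_k || S_k G - X_k B_k ||^2, where G is
    the common configuration and S_k selects the rows observed in block k."""
    return sum(np.linalg.norm(S @ G - X @ B) ** 2 for S, X, B in blocks)
```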

Relevance:

100.00%

Abstract:

Principal curves have been defined by Hastie and Stuetzle (JASA, 1989) as smooth curves passing through the middle of a multidimensional data set. They are nonlinear generalizations of the first principal component, a characterization of which is the basis for the principal curves definition. In this paper we propose an alternative approach based on a different property of principal components. Consider a point in the space where a multivariate normal is defined and, for each hyperplane containing that point, compute the total variance of the normal distribution conditioned to belong to that hyperplane. Now choose the hyperplane minimizing this conditional total variance and look for the corresponding conditional mean. The first principal component of the original distribution passes through this conditional mean and is orthogonal to that hyperplane. This property is easily generalized to data sets with nonlinear structure. Repeating the search from different starting points, many points analogous to conditional means are found. We call them principal oriented points. When a one-dimensional curve runs through the set of these special points, it is called a principal curve of oriented points. Successive principal curves are recursively defined from a generalization of the total variance.
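A hedged two-dimensional sketch of the hyperplane property described above, using exact Gaussian conditioning and a grid search over directions; the covariance matrix and the starting point are made-up numbers.

```python
# Among all lines (hyperplanes) through x, find the one minimizing the total
# variance of a bivariate normal conditioned on that line, then take the
# conditional mean. The grid search over angles is our simplification.
import numpy as np

Sigma = np.array([[3.0, 1.0], [1.0, 1.5]])   # hypothetical covariance
mu = np.zeros(2)
x = np.array([2.0, -1.0])                    # starting point

best = None
for theta in np.linspace(0.0, np.pi, 1800, endpoint=False):
    b = np.array([np.cos(theta), np.sin(theta)])   # normal to the hyperplane
    s2 = b @ Sigma @ b
    # Gaussian conditioning on the constraint b'z = b'x:
    cond_cov = Sigma - np.outer(Sigma @ b, Sigma @ b) / s2
    total_var = np.trace(cond_cov)
    if best is None or total_var < best[0]:
        cond_mean = mu + Sigma @ b * (b @ (x - mu)) / s2
        best = (total_var, b, cond_mean)

print("minimizing normal direction:", best[1])  # ~ first eigenvector of Sigma
print("principal oriented point:  ", best[2])
```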

Relevance:

100.00%

Abstract:

By means of classical Itô calculus we decompose option prices as the sum of the classical Black-Scholes formula, with volatility parameter equal to the root-mean-square future average volatility, plus a term due to correlation and a term due to the volatility of the volatility. This decomposition allows us to develop first- and second-order approximation formulas for option prices and implied volatilities in the Heston volatility framework, as well as to study their accuracy. Numerical examples are given.
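A minimal sketch of the decomposition's leading term: the Black-Scholes call price evaluated at the root-mean-square future average volatility. The parameter values are hypothetical, and the correlation and vol-of-vol correction terms are omitted.

```python
# Hedged sketch: leading (Black-Scholes) term of the decomposition.
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# sigma_rms: root-mean-square future average volatility, e.g. implied by
# Heston parameters (the number below is a hypothetical stand-in).
sigma_rms = 0.25
print(black_scholes_call(S=100.0, K=105.0, T=0.5, r=0.02, sigma=sigma_rms))
```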

Relevance:

100.00%

Abstract:

We study the existence of moments and the tail behaviour of the densities of storage processes. We give sufficient conditions for the existence and non-existence of moments using the integrability conditions of submultiplicative functions with respect to Lévy measures. We then study the asymptotic behaviour of the tails of these processes using the concave or convex envelope of the release rate function.
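For context, the classical moment criterion being invoked (Sato's integrability criterion for Lévy processes, in our paraphrase, not the paper's statement) reads:

```latex
% Sato's criterion (our paraphrase): for a Lévy process $(X_t)$ with Lévy
% measure $\nu$ and a locally bounded, submultiplicative $g \ge 0$,
\[
  \mathbb{E}\,g(X_t) < \infty \ \text{for some (equivalently, all) } t > 0
  \iff \int_{|x|>1} g(x)\,\nu(\mathrm{d}x) < \infty .
\]
```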

Relevance:

100.00%

Abstract:

In this paper, generalizing results in Alòs, León and Vives (2007b), we show that the dependence of jumps in the volatility under a jump-diffusion stochastic volatility model has no effect on the short-time behaviour of the at-the-money implied volatility skew, although the corresponding Hull and White formula depends on the jumps. Towards this end, we use Malliavin calculus techniques for Lévy processes based on Løkka (2004), Petrou (2006), and Solé, Utzet and Vives (2007).

Relevance:

100.00%

Abstract:

As the prevalence of smoking has decreased to below 20%, health practitioners' interest has shifted towards the prevalence of obesity, and reducing it is one of the major health challenges in decades to come. In this paper we study the impact that the final product of the anti-smoking campaign, that is, smokers quitting the habit, had on average weight in the population. To this end, we use data from the Behavioral Risk Factor Surveillance System, a large series of independent representative cross-sectional surveys. We construct a synthetic panel that allows us to control for unobserved heterogeneity, and we exploit exogenous changes in taxes and regulations to instrument the endogenous decision to give up the habit of smoking. Our estimates are very close to estimates issued in the 90s by the US Department of Health, and indicate that a 10% decrease in the incidence of smoking leads to an average weight increase of 2.2 to 3 pounds, depending on the choice of specification. In addition, we find evidence that the effect overshoots in the short run, although a significant part remains even after two years. However, when we split the sample between men and women, we only find a significant effect for men. Finally, the implicit elasticity of quitting smoking with respect to the probability of becoming obese is calculated at 0.58. This implies that the net benefit of reducing the incidence of smoking by 1% is positive even though the cost to society is $0.6 billion.
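A hedged sketch of the instrumental-variables step described above: quitting instrumented with cigarette taxes and smoking regulations in a two-stage least squares. The file and variable names are hypothetical, not the paper's data.

```python
# Hedged 2SLS sketch; column names and the data file are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("brfss_synthetic_panel.csv")  # hypothetical synthetic panel

# First stage: quitting on the instruments (taxes, smoking regulations).
X1 = sm.add_constant(df[["cig_tax", "smoke_free_laws", "age", "male"]])
quit_hat = sm.OLS(df["quit"], X1).fit().fittedvalues

# Second stage: body weight on instrumented quitting plus controls.
# (Second-stage OLS standard errors need the usual 2SLS correction.)
X2 = sm.add_constant(pd.DataFrame({"quit_hat": quit_hat,
                                   "age": df["age"], "male": df["male"]}))
print(sm.OLS(df["weight_lbs"], X2).fit().summary())
```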

Relevance:

100.00%

Abstract:

The generalization of simple (two-variable) correspondence analysis to more than two categorical variables, commonly referred to as multiple correspondence analysis, is neither obvious nor well-defined. We present two alternative ways of generalizing correspondence analysis, one based on the quantification of the variables and intercorrelation relationships, and the other based on the geometric ideas of simple correspondence analysis. We propose a version of multiple correspondence analysis, with adjusted principal inertias, as the method of choice for the geometric definition, since it contains simple correspondence analysis as an exact special case, which is not the case for the standard generalizations. We also clarify the issue of supplementary point representation and the properties of joint correspondence analysis, a method that visualizes all two-way relationships between the variables. The methodology is illustrated using data on attitudes to science from the International Social Survey Program on Environment in 1993.
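A hedged sketch of the inertia adjustment mentioned above, following Greenacre's published formula as we understand it: principal inertias of the indicator-matrix analysis are rescaled, keeping only those above the average 1/Q. The numbers in the example are invented.

```python
# Hedged sketch of adjusted principal inertias in MCA; our paraphrase of
# the adjustment, with Q the number of categorical variables.
import numpy as np

def adjusted_inertias(lam, Q):
    """Rescale indicator-matrix principal inertias lam exceeding 1/Q."""
    lam = np.asarray(lam, dtype=float)
    keep = lam > 1.0 / Q
    return ((Q / (Q - 1.0)) * (lam[keep] - 1.0 / Q)) ** 2

# Example: indicator-matrix principal inertias for Q = 4 variables.
print(adjusted_inertias([0.45, 0.31, 0.25, 0.18], Q=4))
```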

Relevance:

100.00%

Abstract:

This paper analyzes the relationship between ethnic fractionalization, polarization, and conflict. In recent years many authors have found empirical evidence that ethnic fractionalization has a negative effect on growth. One mechanism that can explain this nexus is the effect of ethnic heterogeneity on rent-seeking activities and the increase in potential conflict, which is negative for investment. However, the empirical evidence supporting the effect of ethnic fractionalization on the incidence of civil conflicts is very weak. Although ethnic fractionalization may be important for growth, we argue that the channel is not through an increase in potential ethnic conflict. We discuss the appropriateness of indices of polarization to capture conflictive dimensions. We develop a new measure of ethnic heterogeneity that satisfies the basic properties associated with the concept of polarization. The empirical section shows that this index of ethnic polarization is a significant variable in the explanation of the incidence of civil wars. This result is robust to the presence of other indicators of ethnic heterogeneity, other sources of data for the construction of the index, and other data structures.
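For concreteness, a small sketch of the two heterogeneity measures at issue: fractionalization and a discrete polarization index of the Reynal-Querol type. The group shares in the example are invented.

```python
# Hedged sketch: fractionalization F = 1 - sum(pi^2) and a discrete
# polarization index of the Reynal-Querol type, RQ = 4 * sum(pi^2 (1 - pi)).
import numpy as np

def fractionalization(p):
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p**2)

def polarization_rq(p):
    p = np.asarray(p, dtype=float)
    return 4.0 * np.sum(p**2 * (1.0 - p))

shares = [0.5, 0.3, 0.2]          # hypothetical ethnic group shares
print(fractionalization(shares))  # grows with many small groups
print(polarization_rq(shares))    # maximal for two groups of one half each
```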

Relevance:

100.00%

Abstract:

Dual scaling of a subjects-by-objects table of dominance data (preferences, paired comparisons and successive categories data) has been contrasted with correspondence analysis, as if the two techniques were somehow different. In this note we show that dual scaling of dominance data is equivalent to the correspondence analysis of a table which is doubled with respect to subjects. We also show that the results of both methods can be recovered from a principal components analysis of the undoubled dominance table which is centred with respect to subject means.
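A hedged illustration of doubling with respect to subjects, using one common coding for rankings; the exact coding convention in the paper may differ from this sketch.

```python
# Hedged sketch: each subject's row of dominance values is paired with its
# complementary row, so every subject contributes a "positive" and a
# "negative" row to the doubled table. Coding shown for ranks r in
# {1, ..., J} over J objects; other conventions exist.
import numpy as np

def double_by_subjects(R):
    R = np.asarray(R, dtype=float)
    J = R.shape[1]
    pos = J - R       # high when the object is ranked near the top
    neg = R - 1.0     # complementary row: pos + neg = J - 1 everywhere
    return np.vstack([pos, neg])

ranks = np.array([[1, 2, 3],    # subject 1 ranks object A first, C last
                  [3, 1, 2]])   # hypothetical ranking (dominance) data
print(double_by_subjects(ranks))
```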

Relevance:

100.00%

Abstract:

The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
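A toy instance of the skeleton idea: empirical risk minimization over a finite cover of a function class. Here the class is thresholds on [0, 1] and the skeleton is a uniform grid; the paper's data-dependent metric is not reproduced.

```python
# Hedged toy sketch: minimize empirical risk over a finite "skeleton"
# (cover) of a class of threshold classifiers.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=200)
y = (x > 0.6).astype(int) ^ (rng.uniform(size=200) < 0.1)  # 10% label noise

skeleton = np.linspace(0.0, 1.0, 33)        # finite cover of the class
emp_risk = [np.mean((x > t).astype(int) != y) for t in skeleton]
t_hat = skeleton[int(np.argmin(emp_risk))]  # the skeleton estimate
print(f"selected threshold: {t_hat:.3f}, empirical risk: {min(emp_risk):.3f}")
```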

Relevance:

100.00%

Abstract:

In 2007 the first Quality Enhancement Meeting on sampling in the European Social Survey (ESS) took place. The discussion focused on design effects and interviewer effects in face-to-face interviews. Following the recommendations of this meeting, the Spanish ESS team studied the impact of interviewers as a new element of the design effect on response variance, using the information in the corresponding Sample Design Data Files. Hierarchical multilevel and cross-classified multilevel analyses are conducted in order to estimate the amount of response variation due to PSUs and to interviewers for different questions in the survey. Factors such as the interviewer's age, gender, workload, training and experience, respondent characteristics such as age, gender and reluctance to participate, and their possible interactions are also included in the analysis of some specific questions, such as trust in politicians and trust in the legal system. Some recommendations related to future sampling designs and the contents of the briefing sessions are derived from this initial research.
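A hedged sketch of a cross-classified variance-components model of this kind, with responses subject to crossed PSU and interviewer effects. The file and column names are hypothetical, and this is one generic way to fit such a model in Python, not the team's actual code.

```python
# Hedged sketch: crossed random effects via variance components, using the
# single-group trick for cross-classified models in statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ess_sample_design.csv")  # hypothetical data file
df["one"] = 1                              # single group holding all rows

md = smf.mixedlm("trust_politicians ~ resp_age + resp_gender",
                 data=df, groups="one", re_formula="0",
                 vc_formula={"psu": "0 + C(psu)",
                             "interviewer": "0 + C(interviewer)"})
fit = md.fit()
print(fit.summary())  # variance shares for PSU vs. interviewer
```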

Relevance:

100.00%

Abstract:

We consider an agent who has to repeatedly make choices in an uncertain and changing environment, who has full information about the past, who discounts future payoffs, but who has no prior. We provide a learning algorithm that performs almost as well as the best of a given finite number of experts or benchmark strategies, and does so at any point in time, provided the agent is sufficiently patient. The key is to find the appropriate degree of forgetting of the distant past. Standard learning algorithms that treat the recent and distant past equally do not have the sequential epsilon-optimality property.
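One standard learner with the "forgetting" flavour the abstract describes is discounted exponential weights over a finite set of experts, sketched below; this is a generic algorithm, not necessarily the authors' construction.

```python
# Hedged sketch: exponential weights with geometric down-weighting of past
# payoffs, so distant history is gradually forgotten.
import numpy as np

def discounted_exp_weights(payoffs, eta=1.0, beta=0.9):
    """payoffs: T x K array; payoffs[t, k] = payoff of expert k at time t."""
    T, K = payoffs.shape
    score = np.zeros(K)                 # discounted cumulative payoffs
    choices = []
    for t in range(T):
        w = np.exp(eta * (score - score.max()))  # stabilized weights
        choices.append(int(np.argmax(w)))        # or sample ~ w / w.sum()
        score = beta * score + payoffs[t]        # geometric forgetting
    return choices

payoffs = np.random.default_rng(1).uniform(size=(100, 3))  # toy environment
print(discounted_exp_weights(payoffs)[-5:])
```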

Relevance:

100.00%

Abstract:

The objective of this paper is to compare the performance of two predictive radiological models, logistic regression (LR) and neural network (NN), with five different resampling methods. One hundred and sixty-seven patients with proven calvarial lesions as the only known disease were enrolled. Clinical and CT data were used for the LR and NN models. Both models were developed with cross-validation, leave-one-out and three different bootstrap algorithms. The final results of each model were compared by error rate and by the area under the receiver operating characteristic curve (Az). The neural network obtained a statistically higher Az than LR with cross-validation. The remaining resampling validation methods did not reveal statistically significant differences between the LR and NN rules. The neural network classifier performs better than the one based on logistic regression. This advantage is well detected by three-fold cross-validation, but remains unnoticed when leave-one-out or bootstrap algorithms are used.
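A hedged sketch of the comparison design: logistic regression versus a small neural network, scored by cross-validated Az (AUC). The data below are a toy stand-in for the study's clinical and CT variables.

```python
# Hedged sketch: LR vs. NN compared by three-fold cross-validated AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=167, n_features=10, random_state=0)

lr = LogisticRegression(max_iter=1000)
nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

for name, clf in [("LR", lr), ("NN", nn)]:
    auc = cross_val_score(clf, X, y, cv=3, scoring="roc_auc")  # 3-fold CV
    print(f"{name}: mean Az = {auc.mean():.3f} (+/- {auc.std():.3f})")
```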