199 results for Quantitative Methods
Abstract:
This paper analyzes the relationship between ethnic fractionalization, polarization, and conflict. In recent years many authors have found empirical evidence that ethnic fractionalization has a negative effect on growth. One mechanism that can explain this nexus is the effect of ethnic heterogeneity on rent-seeking activities and the increase in potential conflict, which is negative for investment. However, the empirical evidence supporting the effect of ethnic fractionalization on the incidence of civil conflicts is very weak. Although ethnic fractionalization may be important for growth, we argue that the channel is not through an increase in potential ethnic conflict. We discuss the appropriateness of indices of polarization to capture conflictive dimensions. We develop a new measure of ethnic heterogeneity that satisfies the basic properties associated with the concept of polarization. The empirical section shows that this index of ethnic polarization is a significant variable in the explanation of the incidence of civil wars. This result is robust to the presence of other indicators of ethnic heterogeneity, other sources of data for the construction of the index, and other data structures.
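The abstract gives no formulas, but the contrast it draws can be illustrated with a common pairing from this literature: Herfindahl-based fractionalization versus a polarization index that peaks when the population splits into two equal groups. A minimal sketch, assuming group population shares as input; the paper's own index may differ in its exact functional form:

```python
def fractionalization(shares):
    # Probability that two randomly drawn individuals belong to different groups.
    return 1.0 - sum(p * p for p in shares)

def polarization(shares):
    # A polarization index of the kind discussed in this literature:
    # it is maximized (value 1) by two groups of equal size.
    return 4.0 * sum(p * p * (1.0 - p) for p in shares)

two_groups = [0.5, 0.5]
ten_groups = [0.1] * 10
print(fractionalization(two_groups), polarization(two_groups))  # → 0.5 1.0
print(round(fractionalization(ten_groups), 6), round(polarization(ten_groups), 6))  # → 0.9 0.36
```

The example shows why the two concepts diverge: splitting society into many small groups raises fractionalization but lowers polarization, which is the distinction the abstract exploits.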
Abstract:
I discuss the identifiability of a structural New Keynesian Phillips curve when it is embedded in a small scale dynamic stochastic general equilibrium model. Identification problems emerge because not all the structural parameters are recoverable from the semi-structural ones and because the objective functions I consider are poorly behaved. The solution and the moment mappings are responsible for the problems.
Abstract:
Let a class $\mathcal{F}$ of densities be given. We draw an i.i.d.\ sample from a density $f$ which may or may not be in $\mathcal{F}$. After every $n$, one must make a guess whether $f \in \mathcal{F}$ or not. A class is almost surely testable if there exists a testing sequence such that, for any $f$, we make finitely many errors almost surely. In this paper, several results are given that allow one to decide whether a class is almost surely testable. For example, continuity and square integrability are not testable, but unimodality, log-concavity, and boundedness by a given constant are.
Abstract:
We consider adaptive sequential lossy coding of bounded individual sequences when the performance is measured by the sequentially accumulated mean squared distortion. The encoder and the decoder are connected via a noiseless channel of capacity $R$ and both are assumed to have zero delay. No probabilistic assumptions are made on how the sequence to be encoded is generated. For any bounded sequence of length $n$, the distortion redundancy is defined as the normalized cumulative distortion of the sequential scheme minus the normalized cumulative distortion of the best scalar quantizer of rate $R$ which is matched to this particular sequence. We demonstrate the existence of a zero-delay sequential scheme which uses common randomization in the encoder and the decoder such that the normalized maximum distortion redundancy converges to zero at a rate $n^{-1/5}\log n$ as the length of the encoded sequence $n$ increases without bound.
Abstract:
This paper presents a simple Optimised Search Heuristic for the Job Shop Scheduling problem that combines a GRASP heuristic with a branch-and-bound algorithm. The proposed method is compared with similar approaches and leads to better results in terms of solution quality and computing times.
Abstract:
The singular value decomposition and its interpretation as a linear biplot has proved to be a powerful tool for analysing many forms of multivariate data. Here we adapt biplot methodology to the specific case of compositional data consisting of positive vectors, each of which is constrained to have unit sum. These relative variation biplots have properties relating to special features of compositional data: the study of ratios, subcompositions and models of compositional relationships. The methodology is demonstrated on a data set consisting of six-part colour compositions in 22 abstract paintings, showing how the singular value decomposition can achieve an accurate biplot of the colour ratios and how possible models interrelating the colours can be diagnosed.
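The core computation behind a relative variation biplot can be sketched as a double-centred log transform followed by an SVD. The data below are invented for illustration (four samples, three parts instead of the paper's 22 paintings and six colours), and the exact scaling conventions may differ from those used in the paper:

```python
import numpy as np

# Hypothetical 4 x 3 compositional data: each row is positive and sums to 1.
X = np.array([
    [0.20, 0.30, 0.50],
    [0.10, 0.40, 0.50],
    [0.30, 0.30, 0.40],
    [0.25, 0.35, 0.40],
])

L = np.log(X)
# Double-centre the log data: subtract row means (centred log-ratio)
# and column means, add back the grand mean.
Z = L - L.mean(axis=1, keepdims=True) - L.mean(axis=0) + L.mean()

U, s, Vt = np.linalg.svd(Z, full_matrices=False)

# Biplot coordinates from the first two dimensions (row-principal scaling).
rows = U[:, :2] * s[:2]   # sample (painting) points
cols = Vt[:2, :].T        # part (colour) vectors
```

In such a biplot the link between two column points represents the log-ratio of the corresponding parts, which is why this display is suited to studying ratios and subcompositions.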
Abstract:
The case of two transition tables is considered, that is, two square asymmetric matrices of frequencies where the rows and columns of the matrices are the same objects observed at three different time points. Different ways of visualizing the tables, either separately or jointly, are examined. We generalize an existing idea, where a square matrix is decomposed into symmetric and skew-symmetric parts, to two matrices, leading to a decomposition into four components: (1) average symmetric, (2) average skew-symmetric, (3) symmetric difference from average, and (4) skew-symmetric difference from average. The method is illustrated with an artificial example and an example using real data from a study of changing values over three generations.
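The four-component decomposition described above is easy to make concrete: average the two tables, take their half-difference, and split each of those into its symmetric and skew-symmetric parts. A minimal numpy sketch with invented 3×3 transition tables:

```python
import numpy as np

def sym_skew(M):
    # Any square matrix splits uniquely into symmetric + skew-symmetric parts.
    return (M + M.T) / 2, (M - M.T) / 2

# Hypothetical transition tables for the same objects at three time points.
T1 = np.array([[10., 2., 1.], [4., 8., 3.], [0., 5., 9.]])
T2 = np.array([[12., 1., 2.], [3., 7., 4.], [1., 6., 8.]])

avg, diff = (T1 + T2) / 2, (T1 - T2) / 2
avg_sym, avg_skew = sym_skew(avg)      # components (1) and (2)
diff_sym, diff_skew = sym_skew(diff)   # components (3) and (4)

# The four components reconstruct each table exactly.
assert np.allclose(T1, avg_sym + avg_skew + diff_sym + diff_skew)
assert np.allclose(T2, avg_sym + avg_skew - diff_sym - diff_skew)
```

Because each split is unique and exact, the four components partition all the information in the pair of tables, which is what makes them suitable for separate visualization.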
Abstract:
Over recent years, both governments and international aid organizations have been devoting large amounts of resources to simplifying the procedures for setting up and formalizing firms. Many of these actions have focused on reducing the initial costs of setting up the firm, disregarding the more important role of business registers as a source of reliable information for judges, government departments and, above all, other firms. This reliable information is essential for reducing transaction costs in future dealings with all sorts of economic agents, both public and private. The priorities of reform policies should therefore be thoroughly reviewed, stressing the value of the legal institutions rather than trivializing them as is often the case.
Abstract:
Dual scaling of a subjects-by-objects table of dominance data (preferences, paired comparisons and successive categories data) has been contrasted with correspondence analysis, as if the two techniques were somehow different. In this note we show that dual scaling of dominance data is equivalent to the correspondence analysis of a table which is doubled with respect to subjects. We also show that the results of both methods can be recovered from a principal components analysis of the undoubled dominance table which is centred with respect to subject means.
Abstract:
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected ``skeleton'' of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
Abstract:
We propose a new family of density functions that possess both flexibility and closed form expressions for moments and anti-derivatives, making them particularly appealing for applications. We illustrate its usefulness by applying our new family to obtain density forecasts of U.S. inflation. Our methods generate forecasts that improve on standard methods based on AR-ARCH models relying on normal or Student's t-distributional assumptions.
Abstract:
Structural equation models (SEM) are commonly used to analyze the relationship between variables, some of which may be latent, such as individual ``attitude'' to and ``behavior'' concerning specific issues. A number of difficulties arise when we want to compare a large number of groups, each with large sample size, and the manifest variables are distinctly non-normally distributed. Using a specific data set, we evaluate the appropriateness of the following alternative SEM approaches: multiple group versus MIMIC models, continuous versus ordinal variables estimation methods, and normal theory versus non-normal estimation methods. The approaches are applied to the ISSP-1993 Environmental data set, with the purpose of exploring variation in the mean level of variables of ``attitude'' to and ``behavior'' concerning environmental issues and their mutual relationship across countries. Issues of both theoretical and practical relevance arise in the course of this application.
Abstract:
In 2007 the first Quality Enhancement Meeting on sampling in the European Social Survey (ESS) took place. The discussion focused on design effects and interviewer effects in face-to-face interviews. Following the recommendations of this meeting, the Spanish ESS team studied the impact of interviewers as a new element in the design effect on the response variance, using the information of the corresponding Sample Design Data Files. Hierarchical multilevel and cross-classified multilevel analyses are conducted in order to estimate the amount of response variation due to PSUs and to interviewers for different questions in the survey. Factors such as the interviewer's age, gender, workload, training and experience, and respondent characteristics such as age, gender and reluctance to participate, and their possible interactions, are also included in the analysis of some specific questions, such as trust in politicians and trust in the legal system. Some recommendations related to future sampling designs and the contents of the briefing sessions are derived from this initial research.
Abstract:
We consider an agent who has to repeatedly make choices in an uncertain and changing environment, who has full information of the past, who discounts future payoffs, but who has no prior. We provide a learning algorithm that performs almost as well as the best of a given finite number of experts or benchmark strategies, and does so at any point in time, provided the agent is sufficiently patient. The key is to find the appropriate degree of forgetting the distant past. Standard learning algorithms that treat recent and distant past equally do not have the sequential epsilon-optimality property.
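One simple way to implement the "forgetting the distant past" idea is to discount the accumulated payoff of each expert geometrically before every exponential-weights update. This is an illustrative sketch only, not the paper's algorithm; the parameter names `eta` (learning rate) and `delta` (forgetting factor) are our own:

```python
import math

def discounted_exp_weights(payoffs, eta=1.0, delta=0.9):
    """Exponential weights over experts with geometric discounting of
    past payoffs. delta < 1 forgets the distant past; delta = 1 gives
    the standard, non-forgetting exponential-weights rule."""
    n_experts = len(payoffs[0])
    scores = [0.0] * n_experts
    for round_payoffs in payoffs:
        # Shrink old evidence before adding this round's payoffs.
        scores = [delta * s + g for s, g in zip(scores, round_payoffs)]
    m = max(scores)  # subtract the max for numerical stability
    w = [math.exp(eta * (s - m)) for s in scores]
    total = sum(w)
    return [x / total for x in w]

# Expert 0 was good early on, expert 1 recently; forgetting favours expert 1.
history = [[1, 0]] * 5 + [[0, 1]] * 5
probs = discounted_exp_weights(history, eta=1.0, delta=0.5)
print(probs[1] > probs[0])  # → True
```

With `delta = 1` both experts would end up with equal scores on this history, which illustrates the abstract's point that treating recent and distant past equally loses the tracking property.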
Abstract:
In this paper we argue that corporate social responsibility (CSR) to various stakeholders (customers, shareholders, employees, suppliers, and community) has a positive effect on global brand equity (BE). In addition, policies aimed at satisfying community interests help reinforce the credibility of socially responsible policies with other stakeholders. We test these theoretical contentions using panel data comprising 57 global brands originating from 10 countries (USA, Japan, South Korea, France, UK, Italy, Germany, Finland, Switzerland and the Netherlands) for the period 2002 to 2008. Our findings show that CSR to each of the stakeholder groups has a positive impact on global BE. In addition, global brands that follow local social responsibility policies towards communities obtain strong positive benefits in terms of the generation of BE, as this enhances the positive effects of CSR to other stakeholders, particularly customers. Therefore, for managers of global brands it is particularly productive for generating brand value to combine global strategies with the satisfaction of the interests of local communities.