93 results for Multiple play
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding, where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix; the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix.
In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
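The fuzzy coding described above can be illustrated with a minimal sketch: a single continuous value is coded into three categories via triangular membership functions. The hinge points used here are hypothetical placeholders; in practice they are often taken at quantiles of the variable (e.g. minimum, median, maximum).

```python
def fuzzy_code(x, lo, mid, hi):
    """Fuzzy-code a continuous value into three membership degrees
    (low, medium, high) using triangular membership functions with
    hinge points lo, mid, hi; the degrees always sum to 1."""
    x = min(max(x, lo), hi)            # clamp to the coding range
    if x <= mid:
        m_med = (x - lo) / (mid - lo)
        return (1.0 - m_med, m_med, 0.0)
    m_hi = (x - mid) / (hi - mid)
    return (0.0, 1.0 - m_hi, m_hi)

# Crisp coding would assign a single 1; fuzzy coding spreads membership:
print(fuzzy_code(15.0, 0.0, 10.0, 30.0))   # → (0.0, 0.75, 0.25)
```

Defuzzification reverses this map: multiplying the membership degrees by the hinge values recovers the clamped original value, which is what makes the percentage-form measure of fit possible.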
Abstract:
It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. In this paper, we suggest a stepwise multiple testing procedure which asymptotically controls the familywise error rate at a desired level. Compared to related single-step methods, our procedure is more powerful in the sense that it often will reject more false hypotheses. In addition, we advocate the use of studentization when it is feasible. Unlike some stepwise methods, our method implicitly captures the joint dependence structure of the test statistics, which results in increased ability to detect alternative hypotheses. We prove our method asymptotically controls the familywise error rate under minimal assumptions. We present our methodology in the context of comparing several strategies to a common benchmark and deciding which strategies actually beat the benchmark. However, our ideas can easily be extended and/or modified to other contexts, such as making inference for the individual regression coefficients in a multiple regression framework. Some simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
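The stepwise logic can be illustrated with the classical Holm step-down procedure, a simpler precursor that ignores the joint dependence structure the abstract's method exploits; a minimal sketch:

```python
def holm_stepdown(pvals, alpha=0.05):
    """Holm step-down test: compare the j-th smallest p-value (j = 0, 1, ...)
    against alpha / (k - j) and reject until the first failure; this
    controls the familywise error rate at level alpha."""
    k = len(pvals)
    order = sorted(range(k), key=lambda i: pvals[i])
    reject = [False] * k
    for j, i in enumerate(order):
        if pvals[i] <= alpha / (k - j):
            reject[i] = True
        else:
            break                      # stop at the first non-rejection
    return reject

print(holm_stepdown([0.001, 0.04, 0.03, 0.005]))   # → [True, False, False, True]
```

Because the threshold grows at each step, Holm rejects everything single-step Bonferroni rejects, and possibly more; resampling-based stepwise procedures sharpen this further by estimating the joint distribution of the test statistics.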
Abstract:
The first generation models of currency crises have often been criticized because they predict that, in the absence of very large triggering shocks, currency attacks should be predictable and lead to small devaluations. This paper shows that these features of first generation models are not robust to the inclusion of private information. In particular, this paper analyzes a generalization of the Krugman-Flood-Garber (KFG) model, which relaxes the assumption that all consumers are perfectly informed about the level of fundamentals. In this environment, the KFG equilibrium of zero devaluation is only one of many possible equilibria. In all the other equilibria, the lack of perfect information delays the attack on the currency past the point at which the shadow exchange rate equals the peg, giving rise to unpredictable and discrete devaluations.
Illusory correlation in the remuneration of chief executive officers: It pays to play golf, and well
Abstract:
Illusory correlation refers to the use of information in decisions that is uncorrelated with the relevant criterion. We document illusory correlation in CEO compensation decisions by demonstrating that information that is uncorrelated with corporate performance is related to CEO compensation. We use publicly available data from the USA for the years 1998, 2000, 2002, and 2004 to examine the relations between golf handicaps of CEOs and corporate performance, on the one hand, and CEO compensation and golf handicaps, on the other hand. Although we find no relation between handicap and corporate performance, we do find a relation between handicap and CEO compensation. In short, golfers earn more than non-golfers, and pay increases with golfing ability. We relate these findings to the difficulties of judging compensation for CEOs. To overcome this and possibly other illusory correlations in these kinds of decisions, we recommend the use of explicit, mechanical decision rules.
Abstract:
The effectiveness of pre-play communication in achieving efficient outcomes has long been a subject of controversy. In some environments, cheap talk may help to achieve coordination. However, Aumann conjectures that, in a variant of the Stag Hunt game, a signal for efficient play is not self-enforcing and concludes that an "agreement to play [the efficient outcome] conveys no information about what the players will do." Harsanyi and Selten (1988) cite this example as an illustration of risk-dominance vs. payoff-dominance. Farrell and Rabin (1996) agree with the logic, but suspect that cheap talk will nonetheless achieve efficiency. The conjecture is tested with one-way communication. When the sender first chooses a signal and then an action, there is impressive coordination: a 94% probability for the potentially efficient (but risky) play, given a signal for efficient play. Without communication, efforts to achieve efficiency were unsuccessful, as the proportion of B moves is only 35%. I also test a hypothesis that the order of the action and the signal affects the results, finding that the decision order is indeed important. While Aumann's conjecture is behaviorally disconfirmed when the signal is determined initially, the signal's credibility seems to be much more suspect when the sender is known to have first chosen an action, and the results are not statistically distinguishable from those when there is no signal. Some applications and issues in communication and coordination are discussed.
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement of critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
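A generic resampling-based step-down construction of this kind can be sketched as follows (a hypothetical toy version, not the paper's full procedure), assuming one-sided statistics and null resamples supplied by the user. The monotonicity of the critical values holds automatically, because the maximum is taken over a shrinking set of hypotheses.

```python
import numpy as np

def stepdown_maxT(t_obs, t_null, alpha=0.05):
    """Resampling step-down test: at each step the critical value is the
    (1 - alpha) quantile of the maximum of the resampled statistics over
    the hypotheses not yet rejected; reject all statistics exceeding it
    and repeat until no further rejections are possible.
    t_obs: (k,) observed statistics; t_null: (B, k) null resamples
    (e.g. from a bootstrap or permutation scheme)."""
    k = len(t_obs)
    active = list(range(k))
    reject = np.zeros(k, dtype=bool)
    while active:
        crit = np.quantile(t_null[:, active].max(axis=1), 1 - alpha)
        new = [i for i in active if t_obs[i] > crit]
        if not new:
            break                      # no further rejections possible
        reject[new] = True
        active = [i for i in active if i not in new]
    return reject

rng = np.random.default_rng(0)
t_null = rng.standard_normal((5000, 3))    # simulated null draws
print(stepdown_maxT(np.array([4.0, 0.5, 2.9]), t_null))
```

After the first rejections, the critical value is recomputed over the smaller active set, so it can only decrease; this is exactly the monotonicity that makes the step-down argument go through.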
Abstract:
The generalization of simple correspondence analysis, for two categorical variables, to multiple correspondence analysis, where there may be three or more variables, is not straightforward, from both a mathematical and a computational point of view. In this paper we detail the exact computational steps involved in performing a multiple correspondence analysis, including the special aspects of adjusting the principal inertias to correct the percentages of inertia, supplementary points and subset analysis. Furthermore, we give the algorithm for joint correspondence analysis, where the cross-tabulations of all unique pairs of variables are analysed jointly. The code in the R language for every step of the computations is given, as well as the results of each computation.
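The core computation is a singular value decomposition of the standardized residuals of the indicator matrix, followed by an adjustment of the principal inertias. The paper gives R code; the sketch below is an assumed Python/NumPy rendering of the standard steps, not the paper's own algorithm.

```python
import numpy as np

def mca_indicator(Z):
    """Correspondence analysis of an indicator (crisp) matrix Z:
    the principal inertias are the squared singular values of the
    matrix of standardized residuals."""
    P = Z / Z.sum()
    r = P.sum(axis=1)                  # row masses
    c = P.sum(axis=0)                  # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    sv = np.linalg.svd(S, compute_uv=False)
    return sv ** 2                     # principal inertias

def adjusted_inertias(lam, Q):
    """Greenacre's adjustment for Q variables: only the inertias
    whose square root exceeds 1/Q are adjusted (and retained)."""
    return [(Q / (Q - 1)) ** 2 * (np.sqrt(l) - 1 / Q) ** 2
            for l in lam if np.sqrt(l) > 1 / Q]

# Two 3-category variables, crisp-coded for 6 cases:
Z = np.array([[1, 0, 0, 1, 0, 0],
              [1, 0, 0, 0, 1, 0],
              [0, 1, 0, 0, 1, 0],
              [0, 1, 0, 0, 0, 1],
              [0, 0, 1, 0, 0, 1],
              [0, 0, 1, 1, 0, 0]], dtype=float)
lam = mca_indicator(Z)
print(round(lam.sum(), 6))             # total inertia = J/Q - 1 = 2.0
```

The unadjusted inertias illustrate the underestimation problem noted above: their total, (J - Q)/Q for J categories and Q variables, is inflated by dimensions that carry no real association, which is what the adjustment corrects.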
Abstract:
In the analysis of multivariate categorical data, typically the analysis of questionnaire data, it is often advantageous, for substantive and technical reasons, to analyse a subset of response categories. In multiple correspondence analysis, where each category is coded as a column of an indicator matrix or as a row and column of the Burt matrix, it is not correct to simply analyse the corresponding submatrix of data, since the whole geometric structure is different for the submatrix. A simple modification of the correspondence analysis algorithm allows the overall geometric structure of the complete data set to be retained while calculating the solution for the selected subset of points. This strategy is useful for analysing patterns of response amongst any subset of categories and relating these patterns to demographic factors, especially for studying patterns of particular responses such as missing and neutral responses. The methodology is illustrated using data from the International Social Survey Program on Family and Changing Gender Roles in 1994.
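A sketch of that modification, assuming a simple two-way table for illustration: the masses and centering come from the complete matrix, and only the selected columns of the standardized residuals are decomposed, so the full-data geometry is retained.

```python
import numpy as np

def subset_ca(N, cols):
    """Subset correspondence analysis: row and column masses and the
    centering are computed from the COMPLETE table N, then only the
    selected columns of the standardized residual matrix are
    decomposed. Analysing the raw submatrix instead would recompute
    the margins and change the geometry."""
    P = N / N.sum()
    r = P.sum(axis=1)
    c = P.sum(axis=0)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    sv = np.linalg.svd(S[:, cols], compute_uv=False)
    return sv ** 2                     # inertias of the subset

N = np.array([[10., 5., 3.],
              [ 2., 8., 6.],
              [ 4., 4., 9.]])
print(subset_ca(N, [0, 2]))            # inertias of columns 1 and 3 only
```

The subset inertias sum to the part of the total inertia carried by the selected columns, so fit percentages remain comparable to the full analysis.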
Abstract:
We consider an economy where the production technology has constant returns to scale but where in the decentralized equilibrium there are aggregate increasing returns to scale. The result follows from a positive contracting externality among firms. If a firm is surrounded by more firms, employees have more opportunities outside their own firm. This improves employees' incentives to invest in the presence of ex post renegotiation at the firm level, at no cost. Our leading result is that if a region is sparsely populated, or if the degree of development in the region is low enough, there are multiple equilibria in the level of sectoral employment. From the theoretical model we derive a non-linear first-order censored difference equation for sectoral employment. Our results are strongly consistent with the multiple equilibria hypothesis and the existence of a sectoral critical scale (below which the sector follows a delocation process). The scale of the region's population and the degree of development reduce the critical scale of the sector.
Abstract:
The generalization of simple (two-variable) correspondence analysis to more than two categorical variables, commonly referred to as multiple correspondence analysis, is neither obvious nor well-defined. We present two alternative ways of generalizing correspondence analysis, one based on the quantification of the variables and intercorrelation relationships, and the other based on the geometric ideas of simple correspondence analysis. We propose a version of multiple correspondence analysis, with adjusted principal inertias, as the method of choice for the geometric definition, since it contains simple correspondence analysis as an exact special case, which is not the situation of the standard generalizations. We also clarify the issue of supplementary point representation and the properties of joint correspondence analysis, a method that visualizes all two-way relationships between the variables. The methodology is illustrated using data on attitudes to science from the International Social Survey Program on Environment in 1993.
Abstract:
We propose a method to estimate time invariant cyclical DSGE models using the information provided by a variety of filters. We treat data filtered with alternative procedures as contaminated proxies of the relevant model-based quantities, and estimate structural and non-structural parameters jointly using a signal extraction approach. We employ simulated data to illustrate the properties of the procedure and compare our conclusions with those obtained when just one filter is used. We revisit the role of money in the transmission of monetary business cycles.
Abstract:
This paper shows how risk may aggravate fluctuations in economies with imperfect insurance and multiple assets. A two period job matching model is studied, in which risk averse agents act both as workers and as entrepreneurs. They choose between two types of investment: one type is riskless, while the other is a risky activity that creates jobs. Equilibrium is unique under full insurance. If investment is fully insured but unemployment risk is uninsured, then precautionary saving behavior dampens output fluctuations. However, if both investment and employment are uninsured, then an increase in unemployment gives agents an incentive to shift investment away from the risky asset, further increasing unemployment. This positive feedback may lead to multiple Pareto ranked equilibria. An overlapping generations version of the model may exhibit poverty traps or persistent multiplicity. Greater insurance is doubly beneficial in this context since it can both prevent multiplicity and promote risky investment.
Abstract:
We propose a model, and solution methods, for locating a fixed number of multiple-server, congestible common service centers or congestible public facilities. Locations are chosen so as to minimize consumers' congestion (or queuing) and travel costs, considering that all the demand must be served. Customers choose the facilities to which they travel in order to receive service at minimum travel and congestion cost. As a proxy for this criterion, total travel and waiting costs are minimized. The travel cost is a general function of the origin and destination of the demand, while the congestion cost is a general function of the number of customers in queue at the facilities.
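The objective being minimized can be made concrete with a brute-force toy sketch (a hypothetical illustration of the cost structure only, not the paper's solution method): open p of the candidate sites, enumerate customer-to-facility assignments, and charge travel cost plus a congestion cost that grows with the number of customers at each facility.

```python
from itertools import combinations, product

def locate_centers(travel, congestion, p):
    """Choose p sites and an assignment of every customer to an open
    site, minimizing total travel cost plus congestion cost;
    travel[i][j] is the travel cost from customer i to site j, and
    congestion(k) is the queuing cost at a facility serving k customers.
    Exhaustive search: only viable for tiny instances."""
    n, m = len(travel), len(travel[0])
    best = (float("inf"), None, None)
    for sites in combinations(range(m), p):
        for assign in product(sites, repeat=n):    # all demand is served
            load = {j: assign.count(j) for j in sites}
            cost = (sum(travel[i][assign[i]] for i in range(n))
                    + sum(congestion(k) for k in load.values()))
            best = min(best, (cost, sites, assign))
    return best

travel = [[1, 4], [2, 3], [5, 1]]                  # 3 customers, 2 sites
print(locate_centers(travel, lambda k: k * k, 2))  # → (9, (0, 1), (0, 0, 1))
```

Note how the convex congestion term pushes the second customer toward the nearer, but busier, facility only while the queuing penalty stays below the extra travel cost; realistic instances require the dedicated methods the paper proposes.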
Abstract:
This paper presents findings from a study investigating a firm's ethical practices along the value chain. In so doing, we attempt to better understand potential relationships between a firm's ethical stance with its customers and those of its suppliers within a supply chain, and to identify particular sectoral and cultural influences that might impinge on this. Drawing upon a database comprising 667 industrial firms from 27 different countries, we found that ethical practices begin with the firm's relationship with its customers, the characteristics of which then influence the ethical stance with the firm's suppliers within the supply chain. Importantly, market structure, along with some key cultural characteristics, was also found to exert significant influence on the implementation of ethical policies in these firms.
Abstract:
Ideas competition for the planning of the 'Eixample Nord' sector area in the municipality of Vilanova i la Geltrú