108 results for joint hypothesis tests
Abstract:
We extend to score, Wald and difference test statistics the scaled and adjusted corrections to goodness-of-fit test statistics developed in Satorra and Bentler (1988a,b). The theory is framed in the general context of multisample analysis of moment structures, under general conditions on the distribution of observable variables. Computational issues, as well as the relation of the scaled and corrected statistics to the asymptotically robust ones, are discussed. A Monte Carlo study illustrates the comparative performance in finite samples of corrected score test statistics.
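For reference, the scaling correction referenced here takes the following standard form (notation mine: T is the uncorrected test statistic, d its degrees of freedom, Γ the asymptotic covariance matrix of the sample moments, and U the residual weight matrix):

$$\bar{T} = \frac{T}{\hat{c}}, \qquad \hat{c} = \frac{\operatorname{tr}(\hat{U}\hat{\Gamma})}{d},$$

so that the scaled statistic $\bar{T}$ has approximate mean $d$ under the null even when normality fails; the adjusted variant additionally corrects the degrees of freedom to match the variance.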
Abstract:
This article presents, discusses and tests the hypothesis that it is the number of parties that explains the choice of electoral systems, rather than the other way round. Already existing political parties tend to choose electoral systems that, rather than generating new party systems by themselves, will crystallize, consolidate or reinforce previously existing party configurations. A general model develops the argument and presents the concept of 'behavioral-institutional equilibrium' to account for the relation between electoral systems and party systems. The most comprehensive dataset and test of these notions to date, encompassing 219 elections in 87 countries since the 19th century, are presented. The analysis gives strong support to the hypotheses that political party configurations dominated by a few parties tend to establish majority-rule electoral systems, while multiparty systems already existed before the introduction of proportional representation. It also offers the new theoretical proposition that strategic party choice of electoral systems leads to a general trend toward proportional representation over time.
Abstract:
This paper presents a case study of a well-informed investor in the South Sea bubble. We argue that Hoare's Bank, a fledgling West End London banker, knew that a bubble was in progress and nonetheless invested in the stock; it was profitable to "ride the bubble." Using a unique dataset on daily trades, we show that this sophisticated investor was not constrained by institutional factors such as restrictions on short sales or agency problems. Instead, this study demonstrates that predictable investor sentiment can prevent attacks on a bubble; rational investors may only attack when some coordinating event promotes joint action.
Abstract:
Random coefficient regression models have been applied in different fields and they constitute a unifying setup for many statistical problems. The nonparametric study of this model started with Beran and Hall (1992) and it has become a fruitful framework. In this paper we propose and study statistics for testing a basic hypothesis concerning this model: the constancy of coefficients. The asymptotic behavior of the statistics is investigated and bootstrap approximations are used in order to determine the critical values of the test statistics. A simulation study illustrates the performance of the proposals.
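The paper's bootstrap scheme is specific to the random coefficient model, but the general idea of obtaining critical values by resampling can be sketched as follows (a minimal i.i.d. bootstrap illustration, not the authors' procedure; function and parameter names are mine):

```python
import random

def bootstrap_critical_value(data, statistic, level=0.05, n_boot=999, seed=0):
    """Approximate the upper critical value of `statistic` by the empirical
    (1 - level) quantile of its bootstrap distribution under resampling."""
    rng = random.Random(seed)
    boot_stats = sorted(
        statistic([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    # Index of the (1 - level) empirical quantile of the n_boot replicates.
    return boot_stats[int((1 - level) * (n_boot + 1)) - 1]

# Reject the null when the observed statistic exceeds this critical value.
cv = bootstrap_critical_value([0.8, 1.1, 0.9, 1.3, 1.0, 0.7], statistic=max)
```

In practice the resampling would be carried out under the null of constant coefficients; the sketch only shows the quantile mechanics.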
Abstract:
Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work, and a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type I error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
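Holm's (1979) stepdown method mentioned above compares the ordered p-values against increasingly liberal thresholds α/(m − j), stopping at the first failure to reject; a minimal sketch (function name mine):

```python
def holm_stepdown(pvalues, alpha=0.05):
    """Holm (1979) stepdown procedure controlling the familywise error rate.

    Returns a list of booleans, True where the corresponding hypothesis
    is rejected at FWE level alpha.
    """
    m = len(pvalues)
    # Order hypotheses by ascending p-value, remembering original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for step, i in enumerate(order):
        # Stepdown critical value at this step: alpha / (m - step).
        if pvalues[i] <= alpha / (m - step):
            reject[i] = True
        else:
            break  # Monotonicity: once we fail to reject, stop.
    return reject

print(holm_stepdown([0.001, 0.04, 0.02, 0.3]))
```

The monotonicity of the thresholds α/m ≤ α/(m−1) ≤ … ≤ α is exactly the property the paper generalizes to estimated, resampling-based critical values.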
Abstract:
The well-known lack of power of unit root tests has often been attributed to the short length of macroeconomic series and also to DGPs that depart from the I(1)-I(0) alternatives. This paper shows that by using long spans of annual real GNP and GNP per capita (133 years) high power can be achieved, leading to the rejection of both the unit root and the trend-stationary hypothesis. This suggests that possibly neither model provides a good characterization of these data. Next, more flexible representations are considered, namely, processes containing structural breaks (SB) and fractional orders of integration (FI). Economic justification for the presence of these features in GNP is provided. It is shown that the latter models (FI and SB) are in general preferred to the ARIMA (I(1) or I(0)) ones. As a novelty in this literature, new techniques are applied to discriminate between FI and SB models. It turns out that the FI specification is preferred, implying that GNP and GNP per capita are non-stationary, highly persistent but mean-reverting series. Finally, it is shown that the results are robust when breaks in the deterministic component are allowed for in the FI model. Some macroeconomic implications of these findings are also discussed.
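For context, the standard unit root tests referred to here are typically based on an augmented Dickey-Fuller regression of the form (one common specification; the paper's FI and SB representations generalize this setup):

$$\Delta y_t = \mu + \beta t + \rho\, y_{t-1} + \sum_{i=1}^{p} \gamma_i\, \Delta y_{t-i} + \varepsilon_t,$$

where the unit root null $H_0\colon \rho = 0$ is tested against the trend-stationary alternative $\rho < 0$; low power arises because $\rho$ near zero is hard to distinguish from zero in short samples.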
Abstract:
Whether or not to allow Research Joint Ventures (RJVs) is an important instrument in the development of an optimal R&D policy. The regulator, however, is unlikely to know all the relevant information needed to regulate R&D optimally. The extent to which there exist appropriability problems between the firms is one such variable that is private information to the firms in the industry. In a duopoly setting we analyze the characteristics of a second-best R&D policy where the government can either allow RJVs or not and give lump-sum subsidies to the parties involved. The second-best R&D policy without subsidies will either block some welfare-improving RJVs or allow some welfare-reducing ones. With lump-sum subsidies, the second-best policy trades off the expected subsidy cost against allowing welfare-decreasing RJVs or blocking welfare-increasing ones.
Abstract:
We consider the agency problem of a staff member managing microfinancing programs, who can abuse his discretion to embezzle borrowers' repayments. The fact that most borrowers of microfinancing programs are illiterate and live in rural areas where transportation costs are very high makes staff embezzlement particularly relevant, as documented by Mknelly and Kevane (2002). We study the trade-off between the optimal rigid lending contract and the optimal discretionary one and find that a rigid contract is optimal when the audit cost is larger than the gains from insurance. Our analysis explains the rigid repayment schedules used by the Grameen Bank as an optimal response to the bank staff's agency problem. Joint liability reduces borrowers' burden of respecting the rigid repayment schedules by providing them with partial insurance. However, the same insurance can be provided by borrowers themselves under individual liability through a side-contract.
Abstract:
The aim of this paper is to test formally the classical business cycle hypothesis, using data from industrialized countries for the period since 1960. The hypothesis is characterized by the view that the cyclical structure in GDP is concentrated in the investment series: fixed investment typically has a long cycle, while the cycle in inventory investment is shorter. To check the robustness of our results, we subject the data for 15 OECD countries to a variety of detrending techniques. While the hypothesis is not confirmed uniformly for all countries, there is a considerable number for which the data display the predicted pattern. None of the countries shows a pattern which can be interpreted as a clear rejection of the classical hypothesis.
Abstract:
Departures from pure self interest in economic experiments have recently inspired models of "social preferences". We conduct experiments on simple two-person and three-person games with binary choices that test these theories more directly than the array of games conventionally considered. Our experiments show strong support for the prevalence of "quasi-maximin" preferences: People sacrifice to increase the payoffs for all recipients, but especially for the lowest-payoff recipients. People are also motivated by reciprocity: While people are reluctant to sacrifice to reciprocate good or bad behavior beyond what they would sacrifice for neutral parties, they withdraw willingness to sacrifice to achieve a fair outcome when others are themselves unwilling to sacrifice. Some participants are averse to getting different payoffs than others, but based on our experiments and reinterpretation of previous experiments we argue that behavior that has been presented as "difference aversion" in recent papers is actually a combination of reciprocal and quasi-maximin motivations. We formulate a model in which each player is willing to sacrifice to allocate the quasi-maximin allocation only to those players also believed to be pursuing the quasi-maximin allocation, and may sacrifice to punish unfair players.
Abstract:
Considerable experimental evidence suggests that non-pecuniary motives must be addressed when modeling behavior in economic contexts. Recent models of non-pecuniary motives can be classified as either altruism-based, equity-based, or reciprocity-based. We estimate and compare leading approaches in these categories, using experimental data. We then offer a flexible approach that nests the above three approaches, thereby allowing for nested hypothesis testing and for determining the relative strength of each of the competing theories. In addition, the encompassing approach provides a functional form for utility in different settings without the restrictive nature of the approaches nested within it. Using this flexible form for nested tests, we find that intentional reciprocity, distributive concerns, and altruistic considerations all play a significant role in players' decisions.
Abstract:
The generalization of simple (two-variable) correspondence analysis to more than two categorical variables, commonly referred to as multiple correspondence analysis, is neither obvious nor well-defined. We present two alternative ways of generalizing correspondence analysis, one based on the quantification of the variables and intercorrelation relationships, and the other based on the geometric ideas of simple correspondence analysis. We propose a version of multiple correspondence analysis, with adjusted principal inertias, as the method of choice for the geometric definition, since it contains simple correspondence analysis as an exact special case, which is not the case for the standard generalizations. We also clarify the issue of supplementary point representation and the properties of joint correspondence analysis, a method that visualizes all two-way relationships between the variables. The methodology is illustrated using data on attitudes to science from the International Social Survey Program on Environment in 1993.
Abstract:
I discuss the identifiability of a structural New Keynesian Phillips curve when it is embedded in a small scale dynamic stochastic general equilibrium model. Identification problems emerge because not all the structural parameters are recoverable from the semi-structural ones and because the objective functions I consider are poorly behaved. The solution and the moment mappings are responsible for the problems.
Abstract:
The paper analyzes the determinants of the optimal scope of incorporation in the presence of bankruptcy costs. Bankruptcy costs alone generate a non-trivial tradeoff between the benefit of coinsurance and the cost of risk contamination associated with jointly financing corporate projects through debt. This tradeoff is characterized for projects with binary returns, depending on the distributional characteristics of returns (mean, variability, skewness, heterogeneity, correlation, and number of projects), the bankruptcy recovery rate, and the tax rate advantage of debt relative to equity. Our testable predictions are broadly consistent with existing empirical evidence on conglomerate mergers, spin-offs, project finance, and securitization.
The economic effects of the Protestant Reformation: Testing the Weber hypothesis in the German Lands
Abstract:
Many theories, most famously Max Weber's essay on the Protestant ethic, have hypothesized that Protestantism should have favored economic development. With their considerable religious heterogeneity and stability of denominational affiliations until the 19th century, the German Lands of the Holy Roman Empire present an ideal testing ground for this hypothesis. Using population figures in a dataset comprising 272 cities in the years 1300–1900, I find no effects of Protestantism on economic growth. The finding is robust to the inclusion of a variety of controls, and does not appear to depend on data selection or small sample size. In addition, Protestantism has no effect when interacted with other likely determinants of economic development. I also analyze the endogeneity of religious choice; instrumental variables estimates of the effects of Protestantism are similar to the OLS results.