941 results for Multiple Hypothesis Testing


Relevance:

80.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the award of a Master's degree in Management from the NOVA School of Business and Economics.

Relevance:

80.00%

Publisher:

Abstract:

One of the main implications of the efficient market hypothesis (EMH) is that expected future returns on financial assets are not predictable if investors are risk neutral. In this paper we argue that financial time series offer more information than this hypothesis seems to suggest. In particular, we postulate that runs of very large returns can be predictable over short time periods. To show this, we propose a TAR(3,1)-GARCH(1,1) model that is able to describe two different types of extreme events: a first type generated by large-uncertainty regimes, where runs of extremes are not predictable, and a second type where extremes come from isolated dread/joy events. This model is new in the literature on nonlinear processes. Its novelty resides in two features that set it apart from previous TAR methodologies: the regimes are motivated by the occurrence of extreme values, and the threshold variable is defined by the shock affecting the process in the preceding period. In this way the model is able to uncover dependence and clustering of extremes in high- as well as low-volatility periods. The model is tested on General Motors stock prices around two crises that had a substantial impact on financial markets worldwide: Black Monday of October 1987 and September 11th, 2001. Analyzing the periods around these crises, we find evidence of statistical significance of our model, and thereby of predictability of extremes, for September 11th but not for Black Monday. These findings support the hypotheses of a big negative event producing runs of negative returns in the first case, and of the burst of a worldwide stock market bubble in the second. JEL classification: C12; C15; C22; C51. Keywords and phrases: asymmetries, crises, extreme values, hypothesis testing, leverage effect, nonlinearities, threshold models.
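As a rough illustration of the model class, the sketch below simulates a generic three-regime TAR(1) with GARCH(1,1) errors in which the regime is selected by the previous period's standardized shock, as the abstract describes. All parameter values, thresholds, and the standardization choice are illustrative assumptions, not the paper's estimates:

    # Minimal simulation sketch: three-regime TAR(1) with GARCH(1,1) errors,
    # regime chosen by the lagged standardized shock (the threshold variable).
    import numpy as np

    rng = np.random.default_rng(0)
    T = 2000
    lo, hi = -2.0, 2.0  # assumed thresholds: extreme-negative / middle / extreme-positive
    phi = {0: 0.30, 1: 0.05, 2: -0.25}      # AR(1) coefficient per regime (assumed)
    omega, alpha, beta = 0.05, 0.10, 0.85   # GARCH(1,1) parameters (assumed)

    r = np.zeros(T)   # returns
    e = np.zeros(T)   # shocks
    h = np.full(T, omega / (1 - alpha - beta))  # conditional variances

    for t in range(1, T):
        h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
        z_prev = e[t - 1] / np.sqrt(h[t - 1])   # threshold variable: lagged shock
        regime = 0 if z_prev < lo else (2 if z_prev > hi else 1)
        e[t] = np.sqrt(h[t]) * rng.standard_normal()
        r[t] = phi[regime] * r[t - 1] + e[t]

    print("sample kurtosis:", ((r - r.mean()) ** 4).mean() / r.var() ** 2)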

Relevance:

80.00%

Publisher:

Abstract:

This paper considers trade secrecy as an appropriation mechanism in the context of the US Economic Espionage Act (EEA) 1996. We examine the relation between trade secret intensity and firm size, using a cross-section of 95 court cases. The paper builds on extant work in three respects. First, we create a unique body of evidence, using EEA prosecutions from 1996 to 2008. Second, we use an econometric approach to measurement, estimation and hypothesis testing, which allows us to test the robustness of findings comprehensively. Third, we focus on objectively measured valuations, instead of the subjective, self-reported values used elsewhere. We find a stable, robust value for the elasticity of trade secret intensity with respect to firm size, which indicates that a 10% reduction in firm size leads to a 7% increase in trade secret intensity. We find that this result is not sensitive to industrial sector, sample trimming, or functional form.
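As a quick check on the headline number, pairing a 10% size reduction with a 7% intensity increase corresponds to an elasticity of roughly $-0.7$ in a log-log specification (the paper's exact regression form is not reproduced here, so this is only an illustrative sketch):

$$\ln(\text{intensity}) = \alpha + \varepsilon \ln(\text{size}), \qquad \Delta\ln(\text{intensity}) \approx \varepsilon\,\Delta\ln(\text{size}) = (-0.7)\ln(0.9) \approx (-0.7)(-0.105) \approx 0.074 \approx 7\%.$$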

Relevance:

80.00%

Publisher:

Abstract:

In this study we elicit agents' prior information sets regarding a public good, exogenously give information treatments to survey respondents, and subsequently elicit willingness to pay (WTP) for the good and posterior information sets. The design of this field experiment allows us to perform theoretically motivated hypothesis testing between different updating rules: non-informative updating, Bayesian updating, and incomplete updating. We find causal evidence that agents imperfectly update their information sets. We also find causal evidence that the amount of additional information provided to subjects, relative to their pre-existing information levels, can affect stated WTP in ways consistent with overload from too much learning. This result raises important (though familiar) issues for the use of stated preference methods in policy analysis.
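One standard way to nest the three updating rules in a single testable specification (a sketch, not necessarily the paper's model) is a convex combination of prior and Bayesian posterior beliefs, indexed by a weight $\lambda$:

$$\mu_{\text{post}} = \lambda\,\mu_{\text{Bayes}} + (1-\lambda)\,\mu_{\text{prior}}, \qquad \lambda = 0 \text{ (non-informative)}, \quad \lambda = 1 \text{ (Bayesian)}, \quad 0 < \lambda < 1 \text{ (incomplete updating)},$$

so that hypothesis tests on $\lambda$ discriminate between the rules.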

Relevance:

80.00%

Publisher:

Abstract:

Catherine Comiskey, CI and Hypothesis Tests, Part 2. Hypothesis Testing:
- Developing Null and Alternative Hypotheses
- Type I and Type II Errors
- Population Mean: σ Known
- Population Mean: σ Unknown
- Population Proportion
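To make the "Population Mean: σ Known" case concrete, here is a minimal one-sample z-test sketch in Python (the numbers are illustrative, not from the course materials):

    # One-sample z-test for a population mean when sigma is known.
    # H0: mu = mu0 versus H1: mu != mu0 (two-sided).
    import math
    from scipy.stats import norm

    sample_mean, mu0, sigma, n = 102.3, 100.0, 8.0, 40
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    p_value = 2 * norm.sf(abs(z))  # two-sided p-value
    print(f"z = {z:.3f}, p = {p_value:.4f}")  # reject H0 at the 5% level if p < 0.05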

Relevance:

80.00%

Publisher:

Abstract:

Catherine Comiskey, CI and Hypothesis Tests, Part 1. Hypothesis Testing:
- Developing Null and Alternative Hypotheses
- Type I and Type II Errors
- Population Mean: σ Known
- Population Mean: σ Unknown
- Population Proportion
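For the complementary "Population Mean: σ Unknown" case, the sample standard deviation replaces σ and the t distribution with n - 1 degrees of freedom replaces the normal; a minimal sketch with simulated data:

    # One-sample t-test for a population mean when sigma is unknown.
    import numpy as np
    from scipy.stats import ttest_1samp

    rng = np.random.default_rng(1)
    data = rng.normal(loc=101.5, scale=8.0, size=40)  # hypothetical sample
    result = ttest_1samp(data, popmean=100.0)         # H0: mu = 100
    print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")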

Relevance:

80.00%

Publisher:

Abstract:

Forest fire sequences can be modelled as a stochastic point process where events are characterized by their spatial locations and occurrence in time. Cluster analysis permits the detection of the space-time pattern distribution of forest fires. These analyses are useful to assist fire managers in identifying risk areas, implementing preventive measures and devising strategies for an efficient distribution of firefighting resources. This paper aims to identify hot spots in forest fire sequences by means of the space-time scan statistics permutation model (STSSP) and a geographical information system (GIS) for data and results visualization. The scan statistical methodology uses a scanning window, which moves across space and time, detecting local excesses of events in specific areas over a certain period of time. Finally, the statistical significance of each cluster is evaluated through Monte Carlo hypothesis testing. The case study is the forest fires registered by the Forest Service in Canton Ticino (Switzerland) from 1969 to 2008. This dataset consists of geo-referenced single events including the location of the ignition points and additional information. The data were aggregated into three sub-periods (reflecting important preventive legal dispositions) and two main ignition causes (lightning and anthropogenic). Results revealed that forest fire events in Ticino are mainly clustered in the southern region, where most of the population is settled. Our analysis uncovered local hot spots arising from extemporaneous arson activity. Results for the naturally caused fires (lightning fires) disclosed two clusters in the northern mountainous area.
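The Monte Carlo significance step the abstract mentions can be sketched generically: permute event timestamps, recompute the cluster statistic, and rank the observed value against the permuted ones. This is a schematic, not the STSSP implementation, and `cluster_statistic` is a hypothetical placeholder:

    # Generic Monte Carlo permutation p-value for a space-time cluster statistic.
    import numpy as np

    def monte_carlo_p(locations, times, cluster_statistic, n_sim=999, seed=0):
        rng = np.random.default_rng(seed)
        observed = cluster_statistic(locations, times)
        exceed = 0
        for _ in range(n_sim):
            permuted = rng.permutation(times)     # break any space-time association
            if cluster_statistic(locations, permuted) >= observed:
                exceed += 1
        return (exceed + 1) / (n_sim + 1)         # standard Monte Carlo p-value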

Relevance:

80.00%

Publisher:

Abstract:

Several eco-toxicological studies have shown that insectivorous mammals, due to their feeding habits, easily accumulate high amounts of pollutants relative to other mammal species. To assess the bio-accumulation levels of toxic metals and their influence on essential metals, we quantified the concentration of 19 elements (Ca, K, Fe, B, P, S, Na, Al, Zn, Ba, Rb, Sr, Cu, Mn, Hg, Cd, Mo, Cr and Pb) in bones of 105 greater white-toothed shrews (Crocidura russula) from a polluted (Ebro Delta) and a control (Medas Islands) area. Since the chemical contents of a bio-indicator are mainly compositional data, conventional statistical analyses currently used in eco-toxicology can give misleading results. Therefore, to improve the interpretation of the data obtained, we used statistical techniques for compositional data analysis to define groups of metals and to evaluate the relationships between them, from an inter-population viewpoint. Hypothesis testing on the adequate balance-coordinates allows us to confirm intuition-based hypotheses and some previous results. The main statistical goal was to test equal means of balance-coordinates for the two defined populations. After checking normality, one-way ANOVA or Mann-Whitney tests were carried out for the inter-group balances.
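A balance coordinate between two groups of compositional parts R and S is the normalized log-ratio of their geometric means. Below is a minimal sketch of computing one balance and comparing it across two populations; the 5-part compositions and the toxic/essential grouping are arbitrary placeholders, not the paper's data or groupings:

    # Balance coordinate between two groups of parts, then a two-sample test.
    import numpy as np
    from scipy.stats import gmean, mannwhitneyu

    def balance(comp, idx_R, idx_S):
        # sqrt(r*s/(r+s)) * ln(gmean over R / gmean over S), row-wise
        r, s = len(idx_R), len(idx_S)
        gR = gmean(comp[:, idx_R], axis=1)
        gS = gmean(comp[:, idx_S], axis=1)
        return np.sqrt(r * s / (r + s)) * np.log(gR / gS)

    rng = np.random.default_rng(2)
    polluted = rng.dirichlet(np.ones(5), size=60)  # stand-in compositions
    control = rng.dirichlet(np.ones(5), size=45)
    b_pol = balance(polluted, [0, 1], [2, 3, 4])   # hypothetical "toxic" vs "essential"
    b_ctl = balance(control, [0, 1], [2, 3, 4])
    stat, p = mannwhitneyu(b_pol, b_ctl)
    print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")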

Relevance:

80.00%

Publisher:

Abstract:

Considerable experimental evidence suggests that non-pecuniary motives must be addressed when modeling behavior in economic contexts. Recent models of non-pecuniary motives can be classified as altruism-based, equity-based, or reciprocity-based. We estimate and compare leading approaches in these categories, using experimental data. We then offer a flexible approach that nests the above three approaches, thereby allowing for nested hypothesis testing and for determining the relative strength of each of the competing theories. In addition, the encompassing approach provides a functional form for utility in different settings without the restrictive nature of the approaches nested within it. Using this flexible form for nested tests, we find that intentional reciprocity, distributive concerns, and altruistic considerations all play a significant role in players' decisions.
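Nested hypothesis testing of this kind is commonly carried out with a likelihood-ratio statistic. As a generic sketch (the abstract does not specify the paper's exact test): if a restricted model, e.g. pure altruism, is obtained by fixing $q$ parameters of the encompassing model, then under the null

$$\mathrm{LR} = 2\left(\ell_{\text{encompassing}} - \ell_{\text{restricted}}\right) \xrightarrow{d} \chi^2_q.$$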

Relevance:

80.00%

Publisher:

Abstract:

We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
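The randomization step can be illustrated as follows: an outcome X with known support in [0, 1] is replaced by a Bernoulli draw Z with P(Z = 1) = X, which preserves the mean (E[Z] = E[X]) and reduces the problem to binary outcomes, where an exact binomial test applies. A minimal sketch of this reduction (the paper's full procedure, including the de-randomization step, is not reproduced):

    # Reduce bounded outcomes in [0, 1] to Bernoulli outcomes with the same mean,
    # then apply an exact binomial test of H0: E[X] = mu0 against E[X] > mu0.
    import numpy as np
    from scipy.stats import binomtest

    def randomized_exact_test(x, mu0, seed=0):
        rng = np.random.default_rng(seed)
        z = rng.random(len(x)) < x   # Z_i ~ Bernoulli(x_i), so E[Z_i] = x_i
        return binomtest(int(z.sum()), n=len(x), p=mu0, alternative="greater")

    x = np.random.default_rng(3).beta(3, 2, size=50)  # fake data in [0, 1]
    print(randomized_exact_test(x, mu0=0.5).pvalue)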

Relevance:

80.00%

Publisher:

Abstract:

Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds do not say anything about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules $\{g_n\}$, there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for infinitely many $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
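Spelled out, the contrast between the two kinds of bounds is the following (with unspecified constants $c, c' > 0$): the standard bound

$$\inf_{g_n} \sup_{(X, C)} \mathbb{E}\,\mathrm{err}(g_n) \ge c\,\frac{V}{n} \quad \text{for each } n$$

allows the worst-case pair $(X, C)$ to change with $n$, whereas the strong bound

$$\forall \{g_n\} \;\exists (X, C): \quad \mathbb{E}\,\mathrm{err}(g_n) \ge c'\,\frac{k}{n} \quad \text{for infinitely many } n$$

exhibits one fixed pair that is bad infinitely often.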

Relevance:

80.00%

Publisher:

Abstract:

We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean squared distortion of a vector quantizer designed from $n$ i.i.d. data points using any design algorithm is at least $\Omega(n^{-1/2})$ away from the optimal distortion for some distribution on a bounded subset of ${\cal R}^d$. Together with existing upper bounds, this result shows that the minimax distortion redundancy for empirical quantizer design, as a function of the size of the training data, is asymptotically on the order of $n^{-1/2}$. We also derive a new upper bound for the performance of the empirically optimal quantizer.

Relevance:

80.00%

Publisher:

Abstract:

We present an exact test for whether two random variables that have known bounds on their support are negatively correlated. The alternative hypothesis is that they are not negatively correlated. No assumptions are made on the underlying distributions. We show by example that the Spearman rank correlation test, as the competing exact test of correlation in nonparametric settings, rests on an additional assumption on the data-generating process without which it is not valid as a test for correlation. We then show how to test for the significance of the slope in a linear regression analysis that involves a single independent variable and where outcomes of the dependent variable belong to a known bounded set.

Relevance:

80.00%

Publisher:

Abstract:

We introduce a simple new hypothesis testing procedure which, based on an independent sample drawn from a certain density, detects which of $k$ nominal densities the true density is closest to, under the total variation ($L_1$) distance. We obtain a density-free uniform exponential bound for the probability of false detection.
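As a crude illustration of the selection problem only (not the paper's procedure, which achieves density-free exponential bounds), one can estimate the $L_1$ distance from the sample to each nominal density with a histogram and pick the minimizer. The densities, sample, and binning below are arbitrary assumptions:

    # Pick, among k nominal densities, the one closest in approximate L1
    # distance to the density generating the sample. Histogram-based sketch.
    import numpy as np
    from scipy.stats import norm

    nominal = [norm(0, 1), norm(1, 1), norm(0, 2)]  # k = 3 hypothetical densities
    sample = np.random.default_rng(4).normal(0.9, 1.0, size=2000)

    bins = np.linspace(-6, 6, 61)
    emp, _ = np.histogram(sample, bins=bins, density=True)
    width = bins[1] - bins[0]
    mids = (bins[:-1] + bins[1:]) / 2

    # Approximate L1 distance between the empirical histogram and each density:
    dists = [np.sum(np.abs(emp - d.pdf(mids))) * width for d in nominal]
    print("detected density index:", int(np.argmin(dists)))  # expect 1 (mean ~ 0.9)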
