967 results for phylogeographical hypothesis testing


Relevance: 80.00%

Abstract:

Two experiments tested predictions from a theory in which processing load depends on relational complexity (RC), the number of variables related in a single decision. Tasks from six domains (transitivity, hierarchical classification, class inclusion, cardinality, relative-clause sentence comprehension, and hypothesis testing) were administered to children aged 3-8 years. Complexity analyses indicated that the domains entailed ternary relations (three variables). Simpler binary-relation (two variables) items were included for each domain. Thus RC was manipulated with other factors tightly controlled. Results indicated that (i) ternary-relation items were more difficult than comparable binary-relation items, (ii) the RC manipulation was sensitive to age-related changes, (iii) ternary relations were processed at a median age of 5 years, (iv) cross-task correlations were positive, with all tasks loading on a single factor (RC), (v) RC factor scores accounted for 80% (88%) of age-related variance in fluid intelligence (compositionality of sets), (vi) binary- and ternary-relation items formed separate complexity classes, and (vii) the RC approach to defining cognitive complexity is applicable to different content domains.

Relevance: 80.00%

Abstract:

In a hypercompetitive world, the affirmation of virtuousness has met considerable resistance, even being regarded as a synonym of weakness or naivety. However, given the evidence of the potential dangers of leadership devoid of values, ethics, and morality, voices have been raised in defence of virtuous leadership, capable of making significantly positive contributions to organizations and their employees. Starting from this premise, this study aimed to analyse, based on followers' perceptions, the impact of virtuous leadership on organizational commitment, as well as the contribution of the latter to individual performance. Using a quantitative methodology, we first surveyed 113 followers from organizations located in Portugal in order to determine which virtues they valued most in a leader. The data for hypothesis testing were then collected by administering a battery of tests to 351 followers, also working in organizations operating in Portugal. The results suggest that followers' perceptions of three dimensions of leadership virtuousness (values-based leadership, perseverance, and maturity) contribute to organizational commitment, above all in its affective and normative components, and that the latter, in turn, can positively influence individual performance.

Relevance: 80.00%

Abstract:

Probability and Statistics—Selected Problems is a book designed to help senior undergraduate and graduate students quickly review basic material in probability and statistics. Descriptive statistics are presented first, followed by a review of probability. Discrete and continuous distributions are then covered. Sampling and estimation, together with hypothesis testing, are presented in the last two chapters. Solutions to the proposed exercises are provided for the reader's reference.

Relevance: 80.00%

Abstract:

A Work Project, presented as part of the requirements for the award of a Master's Degree in Management from the NOVA – School of Business and Economics.

Relevance: 80.00%

Abstract:

A Work Project, presented as part of the requirements for the award of a Master's Degree in Management from the NOVA – School of Business and Economics.

Relevance: 80.00%

Abstract:

One of the main implications of the efficient market hypothesis (EMH) is that expected future returns on financial assets are not predictable if investors are risk neutral. In this paper we argue that financial time series offer more information than this hypothesis suggests. In particular, we postulate that runs of very large returns can be predictable over short periods. To show this, we propose a TAR(3,1)-GARCH(1,1) model able to describe two different types of extreme events: a first type generated by high-uncertainty regimes, where runs of extremes are not predictable, and a second type where extremes come from isolated dread/joy events. The model is new in the literature on nonlinear processes. Its novelty resides in two features that distinguish it from previous TAR methodologies: the regimes are motivated by the occurrence of extreme values, and the threshold variable is defined by the shock affecting the process in the preceding period. In this way the model can uncover dependence and clustering of extremes in high- as well as low-volatility periods. The model is tested with General Motors stock price data covering two crises that had a substantial impact on financial markets worldwide: Black Monday of October 1987 and September 11th, 2001. By analyzing the periods around these crises, we find statistical support for our model, and thereby for the predictability of extremes, for September 11th but not for Black Monday. These findings are consistent with a large negative event producing runs of negative returns in the first case, and with the burst of a worldwide stock market bubble in the second.

JEL classification: C12; C15; C22; C51

Keywords and phrases: asymmetries, crises, extreme values, hypothesis testing, leverage effect, nonlinearities, threshold models
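
To make the regime mechanism concrete, here is a minimal simulation sketch of a three-regime threshold autoregression with GARCH(1,1) errors in which, as in the abstract, the regime is selected by the shock from the preceding period. All parameter values, the threshold, and the regime labels are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not the paper's estimates).
# Three regimes chosen by the previous shock eps[t-1]:
#   eps[t-1] < -c : "dread", -c <= eps[t-1] <= c : normal, eps[t-1] > c : "joy".
c = 2.0                                  # threshold on the lagged shock
phi = {-1: 0.4, 0: 0.05, 1: 0.3}         # AR(1) coefficient per regime
omega, alpha, beta = 0.05, 0.08, 0.90    # GARCH(1,1) variance dynamics

T = 1000
r = np.zeros(T)                               # returns
h = np.full(T, omega / (1 - alpha - beta))    # conditional variance
eps = np.zeros(T)                             # shocks

for t in range(1, T):
    h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    regime = -1 if eps[t - 1] < -c else (1 if eps[t - 1] > c else 0)
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    r[t] = phi[regime] * r[t - 1] + eps[t]

print(f"sample std: {r.std():.3f}, max |return|: {np.abs(r).max():.3f}")
```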

Relevance: 80.00%

Abstract:

This paper considers trade secrecy as an appropriation mechanism in the context of the US Economic Espionage Act (EEA) 1996. We examine the relation between trade secret intensity and firm size, using a cross section of 95 court cases. The paper builds on extant work in three respects. First, we create a unique body of evidence, using EEA prosecutions from 1996 to 2008. Second, we use an econometric approach to measurement, estimation and hypothesis testing. This allows us to test the robustness of our findings comprehensively. Third, we focus on objectively measured valuations, instead of the subjective, self-reported values used elsewhere. We find a stable, robust value for the elasticity of trade secret intensity with respect to firm size, which indicates that a 10% reduction in firm size leads to a 7% increase in trade secret intensity. We find that this result is not sensitive to industrial sector, sample trimming, or functional form.
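
The reported elasticity is the slope of a log-log regression of trade secret intensity on firm size. Below is a minimal sketch of those mechanics on synthetic data with an assumed true elasticity of -0.7; the variable names and numbers are illustrative, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cross-section: 95 "cases" with an assumed true elasticity of -0.7.
n = 95
log_size = rng.normal(4.0, 1.0, n)                              # log firm size
log_intensity = 2.0 - 0.7 * log_size + rng.normal(0, 0.3, n)    # log intensity

# OLS in logs: the slope is the elasticity d log(intensity) / d log(size).
slope, intercept = np.polyfit(log_size, log_intensity, 1)
print(f"estimated elasticity: {slope:.2f}")
# A slope near -0.7 means a 10% fall in size implies roughly a 7% rise in intensity.
```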

Relevance: 80.00%

Abstract:

In this study we elicit agents' prior information sets regarding a public good, exogenously give information treatments to survey respondents, and subsequently elicit willingness to pay for the good and posterior information sets. The design of this field experiment allows us to perform theoretically motivated hypothesis testing between different updating rules: non-informative updating, Bayesian updating, and incomplete updating. We find causal evidence that agents imperfectly update their information sets. We also find causal evidence that the amount of additional information provided to subjects, relative to their pre-existing information levels, can affect stated WTP in ways consistent with overload from too much learning. This result raises important (though familiar) issues for the use of stated preference methods in policy analysis.
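
The competing updating rules can be made concrete with a toy model. The sketch below contrasts full Bayesian updating of a Beta prior with an incomplete rule that moves beliefs only part of the way toward the Bayesian posterior; gamma = 0 reproduces non-informative updating. The parameterization is an assumption for illustration, not the experiment's design.

```python
def bayes_update(prior_a: float, prior_b: float, successes: int, trials: int) -> float:
    """Posterior mean of a Beta(a, b) prior after observing binomial evidence."""
    return (prior_a + successes) / (prior_a + prior_b + trials)

def incomplete_update(prior_mean: float, bayes_mean: float, gamma: float) -> float:
    """Move beliefs a fraction gamma of the way toward the Bayesian posterior.
    gamma = 0 is non-informative (no) updating; gamma = 1 is full Bayesian updating."""
    return prior_mean + gamma * (bayes_mean - prior_mean)

# Illustrative numbers (assumptions, not the experiment's data):
a, b = 2.0, 2.0                                   # prior with mean 0.5
post = bayes_update(a, b, successes=8, trials=10)
for g in (0.0, 0.5, 1.0):
    print(f"gamma={g:.1f}: updated belief = {incomplete_update(0.5, post, g):.3f}")
```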

Relevance: 80.00%

Abstract:

Catherine Comiskey, CI and Hypothesis Tests, Part 2. Hypothesis Testing:
- Developing Null and Alternative Hypotheses
- Type I and Type II Errors
- Population Mean: σ Known
- Population Mean: σ Unknown
- Population Proportion
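
As a worked instance of the "Population Mean: σ Unknown" case in this outline, here is a minimal one-sample t-test sketch with illustrative data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sample = rng.normal(loc=10.4, scale=2.0, size=30)   # illustrative data

# Population mean, sigma unknown: one-sample t-test of H0: mu = 10.
t_stat, p_value = stats.ttest_1samp(sample, popmean=10.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# Reject H0 at the 5% level iff p < 0.05. A Type I error is rejecting a true H0;
# a Type II error is failing to reject a false H0.
```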

Relevance: 80.00%

Abstract:

Catherine Comiskey, CI and Hypothesis Tests, Part 1. Hypothesis Testing:
- Developing Null and Alternative Hypotheses
- Type I and Type II Errors
- Population Mean: σ Known
- Population Mean: σ Unknown
- Population Proportion
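
For the "Population Mean: σ Known" case in this outline, the test statistic can be computed by hand against a normal reference distribution; the numbers below are illustrative:

```python
import math
from scipy.stats import norm

# Population mean, sigma known: two-sided z-test of H0: mu = 50.
n, xbar, sigma, mu0 = 36, 51.2, 4.0, 50.0    # illustrative numbers
z = (xbar - mu0) / (sigma / math.sqrt(n))    # z = (xbar - mu0) / (sigma / sqrt(n))
p = 2 * norm.sf(abs(z))                      # two-sided p-value
print(f"z = {z:.3f}, p = {p:.3f}")           # z = 1.8, p ~ 0.072: fail to reject at 5%
```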

Relevance: 80.00%

Abstract:

Forest fire sequences can be modelled as a stochastic point process in which events are characterized by their spatial locations and occurrence times. Cluster analysis permits the detection of the space-time distribution pattern of forest fires. Such analyses help fire managers identify risk areas, implement preventive measures, and devise strategies for an efficient distribution of firefighting resources. This paper aims to identify hot spots in forest fire sequences by means of the space-time scan statistic permutation model (STSSP) and a geographical information system (GIS) for data and results visualization. The scan statistic methodology uses a scanning window that moves across space and time, detecting local excesses of events in specific areas over certain periods of time. The statistical significance of each cluster is then evaluated through Monte Carlo hypothesis testing. The case study comprises the forest fires registered by the Forest Service in Canton Ticino (Switzerland) from 1969 to 2008. This dataset consists of geo-referenced single events, including the locations of the ignition points and additional information. The data were aggregated into three sub-periods (reflecting important preventive legal dispositions) and two main ignition causes (lightning and anthropogenic). Results revealed that forest fire events in Ticino are mainly clustered in the southern region, where most of the population is settled. Our analysis uncovered local hot spots arising from extemporaneous arson activity. Results for naturally caused fires (lightning fires) disclosed two clusters in the northern mountainous area.
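
A toy version of the Monte Carlo step can illustrate the idea: fix one space-time cylinder, count the events inside it, then compare with counts obtained after permuting event times over locations, which breaks any space-time interaction. A real scan statistic searches over many candidate windows; this sketch, with invented coordinates and a planted cluster, evaluates a single assumed window.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented event set: (x, y) ignition points and occurrence times in [0, 1).
n = 500
xy = rng.uniform(0, 1, size=(n, 2))
t = rng.uniform(0, 1, size=n)
xy[:50] = rng.normal([0.3, 0.3], 0.02, size=(50, 2))  # planted space-time cluster
t[:50] = rng.uniform(0.4, 0.45, size=50)

def count_in_cylinder(xy, t, center, radius, t0, t1):
    """Count events inside a spatial circle and a time interval (a cylinder)."""
    inside_space = np.linalg.norm(xy - center, axis=1) <= radius
    inside_time = (t >= t0) & (t <= t1)
    return int(np.sum(inside_space & inside_time))

observed = count_in_cylinder(xy, t, np.array([0.3, 0.3]), 0.1, 0.4, 0.45)

# Monte Carlo permutation: shuffle times over locations to break the interaction.
reps = 999
null_counts = np.array([
    count_in_cylinder(xy, rng.permutation(t), np.array([0.3, 0.3]), 0.1, 0.4, 0.45)
    for _ in range(reps)
])
p = (1 + np.sum(null_counts >= observed)) / (reps + 1)
print(f"observed = {observed}, Monte Carlo p = {p:.3f}")
```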

Relevance: 80.00%

Abstract:

Several eco-toxicological studies have shown that insectivorous mammals, due to their feeding habits, easily accumulate high amounts of pollutants relative to other mammal species. To assess the bio-accumulation levels of toxic metals and their influence on essential metals, we quantified the concentration of 19 elements (Ca, K, Fe, B, P, S, Na, Al, Zn, Ba, Rb, Sr, Cu, Mn, Hg, Cd, Mo, Cr and Pb) in bones of 105 greater white-toothed shrews (Crocidura russula) from a polluted (Ebro Delta) and a control (Medas Islands) area. Since the chemical contents of a bio-indicator are essentially compositional data, the conventional statistical analyses currently used in eco-toxicology can give misleading results. Therefore, to improve the interpretation of the data obtained, we used statistical techniques for compositional data analysis to define groups of metals and to evaluate the relationships between them from an inter-population viewpoint. Hypothesis testing on the appropriate balance coordinates allows us to confirm intuition-based hypotheses and some previous results. The main statistical goal was to test equality of the means of the balance coordinates for the two defined populations. After checking normality, one-way ANOVA or Mann-Whitney tests were carried out for the inter-group balances.
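
A minimal sketch of one such inter-group balance and its comparison across areas, assuming an ilr-style balance between two invented groups of parts and simulated compositions (the groupings and data are illustrative, not the study's):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)

def balance(comp, group1, group2):
    """ilr-style balance between two groups of parts of a composition:
    sqrt(r*s/(r+s)) * log(gmean(group1) / gmean(group2))."""
    r, s = len(group1), len(group2)
    g1 = np.exp(np.log(comp[:, group1]).mean(axis=1))  # geometric means per sample
    g2 = np.exp(np.log(comp[:, group2]).mean(axis=1))
    return np.sqrt(r * s / (r + s)) * np.log(g1 / g2)

# Invented 4-part compositions (rows sum to 1) for the two areas.
polluted = rng.dirichlet([2.0, 1.0, 1.5, 3.0], size=60)
control = rng.dirichlet([2.0, 1.0, 3.0, 1.5], size=45)

# Balance of parts {0, 1} (say, toxic metals) against {2, 3} (say, essential ones).
b_pol = balance(polluted, [0, 1], [2, 3])
b_ctl = balance(control, [0, 1], [2, 3])

# Non-parametric test of equal location of the balance across the two populations.
u, p = mannwhitneyu(b_pol, b_ctl, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4g}")
```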

Relevance: 80.00%

Abstract:

Considerable experimental evidence suggests that non-pecuniary motives must be addressed when modeling behavior in economic contexts. Recent models of non-pecuniary motives can be classified as either altruism-based, equity-based, or reciprocity-based. We estimate and compare leading approaches in these categories, using experimental data. We then offer a flexible approach that nests the above three approaches, thereby allowing for nested hypothesis testing and for determining the relative strength of each of the competing theories. In addition, the encompassing approach provides a functional form for utility in different settings without the restrictive nature of the approaches nested within it. Using this flexible form for nested tests, we find that intentional reciprocity, distributive concerns, and altruistic considerations all play a significant role in players' decisions.
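
One way to picture the encompassing approach is a utility function whose special cases are recovered by zeroing parameters, which is what makes nested hypothesis testing possible. The functional form below is an illustrative assumption, not the paper's specification:

```python
def utility(own, other, kindness, a=0.0, alpha=0.0, beta=0.0, rho=0.0):
    """Flexible utility nesting three motives (an illustrative form):
      a     - altruism weight on the other's payoff
      alpha - aversion to disadvantageous inequality (other ahead)
      beta  - aversion to advantageous inequality (self ahead)
      rho   - reciprocity weight on the opponent's perceived kindness
    Setting the other parameters to zero recovers each nested special case."""
    envy = max(other - own, 0.0)
    guilt = max(own - other, 0.0)
    return own + a * other - alpha * envy - beta * guilt + rho * kindness * other

# The same allocation valued under each nested restriction:
print(utility(5, 8, kindness=+1))                        # pure self-interest
print(utility(5, 8, kindness=+1, a=0.3))                 # altruism only
print(utility(5, 8, kindness=+1, alpha=0.6, beta=0.2))   # inequity aversion only
print(utility(5, 8, kindness=+1, rho=0.4))               # reciprocity only
```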

Relevance: 80.00%

Abstract:

We present a new method for constructing exact distribution-free tests (and confidence intervals) for variables that can generate more than two possible outcomes. This method separates the search for an exact test from the goal of creating a non-randomized test. Randomization is used to extend any exact test relating to means of variables with finitely many outcomes to variables with outcomes belonging to a given bounded set. Tests in terms of variance and covariance are reduced to tests relating to means. Randomness is then eliminated in a separate step. This method is used to create confidence intervals for the difference between two means (or variances) and tests of stochastic inequality and correlation.
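
The randomization step can be illustrated as follows: each bounded outcome x_i in [0, 1] is replaced by a Bernoulli(x_i) coin flip with the same mean, after which an exact binomial test applies; de-randomization would then be a separate step. A minimal sketch under these assumptions:

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(5)

# Illustrative bounded outcomes in [0, 1]; H0: the mean is 0.5 or less.
x = rng.beta(3.0, 2.0, size=40)    # true mean 0.6

# Randomized reduction: each observation becomes a Bernoulli(x_i) coin flip,
# so the flips are i.i.d. with success probability equal to the mean of x.
coins = rng.random(40) < x

# Exact binomial test on the coin flips, distribution-free given the bound.
res = binomtest(int(coins.sum()), n=40, p=0.5, alternative="greater")
print(f"successes = {int(coins.sum())}/40, exact p = {res.pvalue:.4f}")
```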

Relevance: 80.00%

Abstract:

Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds say nothing about the rate of decrease of the error for a fixed distribution-concept pair.

In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules $\{g_n\}$ there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for infinitely many $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
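
Stated in display form (a paraphrase of the abstract, with $L_n$ denoting the expected error of the rule and $c, c'$ unspecified positive constants), the classical bound is a per-$n$ statement, while the strong bound pins down a single distribution-concept pair:

```latex
% Classical minimax lower bound: for each sample size n separately,
\inf_{g_n}\,\sup_{(X,C)} \mathbb{E}\,L_n(g_n; X, C) \;\ge\; c\,\frac{V}{n}.

% Strong (fixed-pair) version: for any fixed sequence of rules \{g_n\},
\exists\,(X, C)\ \text{such that}\quad
\mathbb{E}\,L_n(g_n; X, C) \;\ge\; c'\,\frac{k}{n}
\quad \text{for infinitely many } n.
```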