53 results for Null hypothesis
in CentAUR: Central Archive, University of Reading - UK
Abstract:
Whereas the predominance of the El Niño–Southern Oscillation (ENSO) mode in tropical Pacific sea surface temperature (SST) variability is well established, no such consensus seems to have been reached by climate scientists regarding the Indian Ocean. While a number of researchers think that Indian Ocean SST variability is dominated by an active dipolar-type mode of variability, similar to ENSO, others suggest that the variability is mostly passive and behaves like autocorrelated noise. For example, it has recently been suggested that the Indian Ocean SST variability is consistent with the null hypothesis of a homogeneous diffusion process. However, the basin-wide warming trend represents a deviation from a homogeneous diffusion process, which needs to be taken into account. An efficient way of detrending, based on differencing, is introduced and applied to the Hadley Centre sea ice and SST data set (HadISST). The filtered SST anomalies over the basin (23.5N-29.5S, 30.5E-119.5E) are then analysed and found to be inconsistent with the null hypothesis on intraseasonal and interannual timescales. The same differencing method is then applied to the smaller tropical Indian Ocean domain, which is also found to be inconsistent with the null hypothesis on intraseasonal and interannual timescales. In particular, the leading mode of variability corresponds to the Indian Ocean dipole and departs significantly from the null hypothesis only in the autumn season.
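The differencing-based detrending and the Monte Carlo comparison against a noise null described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' code: it uses a synthetic field, independent AR(1) (red-noise) surrogates as a stand-in for the homogeneous-diffusion null, and the variance explained by the leading EOF as the test statistic; all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def difference_detrend(sst):
    """Remove slow trends by first differencing along the time axis (axis 0)."""
    return np.diff(sst, axis=0)

def leading_eof_variance(field):
    """Fraction of variance explained by the leading EOF (via SVD)."""
    anom = field - field.mean(axis=0)
    s = np.linalg.svd(anom, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

def ar1_surrogates(field, n_surr):
    """Independent AR(1) surrogates at each grid point, matching the lag-1
    autocorrelation and variance of the supplied (detrended) field."""
    nt, nx = field.shape
    out = np.empty((n_surr, nt, nx))
    for j in range(nx):
        x = field[:, j]
        r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
        sig = x.std() * np.sqrt(1.0 - r1 ** 2)
        for k in range(n_surr):
            y = np.empty(nt)
            y[0] = x.std() * rng.standard_normal()
            for t in range(1, nt):
                y[t] = r1 * y[t - 1] + sig * rng.standard_normal()
            out[k, :, j] = y
    return out

# Toy field: monthly anomalies at 50 grid points sharing a slow warming trend.
nt, nx = 600, 50
sst = 0.002 * np.arange(nt)[:, None] + rng.normal(0.0, 0.3, (nt, nx))

filtered = difference_detrend(sst)                 # trend removed by differencing
obs_stat = leading_eof_variance(filtered)
null_stats = np.array([leading_eof_variance(s)
                       for s in ar1_surrogates(filtered, 100)])
p_value = np.mean(null_stats >= obs_stat)          # small p => reject the noise null
print(f"leading-EOF variance {obs_stat:.3f}, Monte Carlo p-value {p_value:.3f}")
```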
Abstract:
The Cape Floristic Region is exceptionally species-rich both for its area and latitude, and this diversity is highly unevenly distributed among genera. The modern flora is hypothesized to result largely from recent (post-Oligocene) speciation, and it has long been speculated that particular species-poor lineages pre-date this burst of speciation. Here, we employ molecular phylogenetic data in combination with fossil calibrations to estimate the minimum duration of Cape occupation by 14 unrelated putative relicts. Estimates vary widely between lineages (7-101 Myr ago), and when compared with the estimated timing of onset of the modern flora's radiation, it is clear that many, but possibly not all, of these lineages pre-date its establishment. Statistical comparisons of diversity with lineage age show that the low species diversity of many of the putative relicts results from a lower rate of diversification than in dated Cape radiations. In other putative relicts, however, we cannot reject the possibility that they diversify at the same underlying rate as the radiations, but have been present in the Cape for insufficient time to accumulate higher diversity. Although the extremes in diversity of currently dated Cape lineages fall outside expectations under a single underlying diversification rate, sampling of all Cape lineages would be required to reject this null hypothesis.
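The kind of test described here, asking whether a species-poor lineage of known age is compatible with the diversification rate estimated for the Cape radiations, can be sketched with a pure-birth (Yule) model, under which a stem lineage of age t leaves a geometrically distributed number of extant species. The rate and the example lineage below are hypothetical illustrations, not values from the paper.

```python
import numpy as np

def prob_at_most_n(n_species, age_myr, rate_per_myr):
    """P(N <= n) for a stem lineage of the given age under a pure-birth (Yule)
    process: N is geometric with 'success' probability exp(-r * t)."""
    p = np.exp(-rate_per_myr * age_myr)
    return 1.0 - (1.0 - p) ** n_species

# Hypothetical example: a 60-Myr-old lineage with 3 extant species, tested
# against a diversification rate of 0.1 per lineage per Myr assumed for the
# Cape radiations.  A small probability argues against a shared rate.
p = prob_at_most_n(n_species=3, age_myr=60.0, rate_per_myr=0.1)
print(f"P(N <= 3 | t = 60 Myr, r = 0.1) = {p:.4f}")
```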
Abstract:
Background: This study was carried out as part of a European Union funded project (PharmDIS-e+) to develop and evaluate software aimed at assisting physicians with drug dosing. A drug that causes particular problems with dosing in primary care is digoxin, because of its narrow therapeutic range and low therapeutic index. Objectives: To determine (i) the accuracy of the PharmDIS-e+ software for predicting serum digoxin levels in patients who are taking this drug regularly; (ii) whether there are statistically significant differences between predicted digoxin levels and those measured by a laboratory; and (iii) whether there are differences between doses prescribed by general practitioners and those suggested by the program. Methods: We needed 45 patients to have 95% power to reject the null hypothesis that the mean serum digoxin concentration was within 10% of the mean predicted digoxin concentration. Patients were recruited from two general practices and had been taking digoxin for at least 4 months. Exclusion criteria were dementia, low adherence to digoxin and use of other medications known to interact to a clinically important extent with digoxin. Results: Forty-five patients were recruited. There was a correlation of 0.65 between measured and predicted digoxin concentrations (P < 0.001). The mean difference was 0.12 μg/L (SD 0.26; 95% CI 0.04, 0.19; P = 0.005). Forty-seven per cent of the patients were prescribed the same dose as recommended by the software, 44% were prescribed a higher dose and 9% a lower dose than recommended. Conclusion: The PharmDIS-e+ software was able to predict serum digoxin levels with acceptable accuracy in most patients.
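A rough sketch of the measured-versus-predicted comparison reported above (Pearson correlation, mean paired difference with a 95% confidence interval, and a paired t-test) is given below. It uses synthetic data rather than the study's patient records; the sample size and error magnitudes are assumptions chosen only to mirror the reported figures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
predicted = rng.normal(1.0, 0.3, 45)                # µg/L, hypothetical predictions
measured = predicted + rng.normal(0.12, 0.26, 45)   # built-in bias, for illustration

r, p_corr = stats.pearsonr(measured, predicted)

diff = measured - predicted
mean_diff, sd = diff.mean(), diff.std(ddof=1)
se = sd / np.sqrt(len(diff))
t_crit = stats.t.ppf(0.975, df=len(diff) - 1)
ci = (mean_diff - t_crit * se, mean_diff + t_crit * se)
t_stat, p_paired = stats.ttest_rel(measured, predicted)

print(f"r = {r:.2f} (p = {p_corr:.3g})")
print(f"mean difference = {mean_diff:.2f} µg/L, "
      f"95% CI ({ci[0]:.2f}, {ci[1]:.2f}), paired p = {p_paired:.3f}")
```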
Abstract:
Fluctuations in the solar wind plasma and magnetic field are well described by the sum of two power law distributions. It has been postulated that these distributions are the result of two independent processes: turbulence, which contributes mainly to the smaller fluctuations, and the crossing of boundaries of flux tubes of coronal origin, which dominates the larger variations. In this study we explore the correspondence between changes in the magnetic field and changes in other solar wind properties. Changes in density and temperature may result from either turbulence or coronal structures, whereas changes in composition, such as the alpha-to-proton ratio, are unlikely to arise from in-transit effects. Observations spanning the entire ACE dataset are compared with a null hypothesis of no correlation between magnetic field discontinuities and changes in other solar wind parameters. Evidence for coronal structuring is weaker than for in-transit turbulence, with only ∼25% of large magnetic field discontinuities associated with a significant change in the alpha-to-proton ratio, compared to ∼40% for significant density and temperature changes. However, a lack of a detectable alpha-to-proton signature is not sufficient to discount a structure as having a solar origin.
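The "no correlation" null can be tested with a simple permutation scheme of the kind sketched below: the fraction of magnetic-field discontinuities accompanied by a significant change in another parameter is compared with the distribution obtained after randomly shifting the discontinuity series in time. This is an illustrative stand-in for the study's analysis, using synthetic event series and arbitrary event rates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000                                   # hypothetical number of time steps
disc = rng.random(n) < 0.01                  # magnetic-field discontinuity flags
change = rng.random(n) < 0.05                # significant parameter-change flags

def coincidence_fraction(disc, change):
    """Fraction of discontinuities that coincide with a significant change."""
    return change[disc].mean()

obs = coincidence_fraction(disc, change)
# Null distribution: circularly shift the discontinuity series, preserving its
# event count and clustering but destroying any alignment with the changes.
null = np.array([coincidence_fraction(np.roll(disc, rng.integers(1, n)), change)
                 for _ in range(1000)])
p_value = np.mean(null >= obs)
print(f"observed fraction = {obs:.3f}, permutation p-value = {p_value:.3f}")
```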
Abstract:
This paper investigates the relationship between lease maturity and rent in commercial property. Over the last decade, market-led changes to lease structures, the threat of government intervention and the associated emergence of the Codes of Practice for commercial leases have stimulated growing interest in the pricing of commercial property leases. Seminal work by Grenadier (1995) derived a set of hypotheses about the pricing of different lease lengths in different market conditions. Whilst there is a compelling theoretical case for, and a strong intuitive expectation of, differential pricing of different lease maturities, to date the empirical evidence is inconclusive. Two Swedish studies have found mixed results (Gunnelin and Soderbergh, 2003; Englund et al., 2003): in only half the cases is the null hypothesis that lease length has no effect rejected. In the UK, Crosby et al. (2003) report counterintuitive results. In some markets they find that short lease terms are associated with low rents, whilst in others they are associated with high rents. Drawing upon a substantial database of commercial lettings in central London (West End and City of London) over the last decade, we investigate the relationship between rent and lease maturity. In particular, we test whether a building quality variable omitted in previous studies provides empirical results that are more consistent with the theoretical and intuitive a priori expectations. It is found that initial lease rates are upward sloping in lease term and that this relationship is constant over time.
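A hedonic regression of the sort described, log rent on lease term with a building-quality control and year effects, might look roughly like the sketch below. The variable names and the synthetic data frame are assumptions, not the paper's specification; the null hypothesis of interest is that the lease-term coefficient is zero.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "lease_term": rng.integers(5, 26, n),        # lease length in years
    "quality": rng.normal(0.0, 1.0, n),          # building-quality score
    "year": rng.integers(1998, 2008, n),         # letting year
})
# Synthetic rents with a small positive lease-term effect built in.
df["log_rent"] = (3.5 + 0.01 * df["lease_term"] + 0.2 * df["quality"]
                  + 0.02 * (df["year"] - 1998) + rng.normal(0.0, 0.15, n))

# H0: lease term has no effect on rent (coefficient on lease_term equals zero).
model = smf.ols("log_rent ~ lease_term + quality + C(year)", data=df).fit()
print(model.summary().tables[1])
```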
Abstract:
This paper considers the effect of GARCH errors on the tests proposed by Perron (1997) for a unit root in the presence of a structural break. We assess the impact of degeneracy and integratedness of the conditional variance individually and find that, apart from in the limit, the testing procedure is insensitive to the degree of degeneracy but does exhibit an increasing over-sizing as the process becomes more integrated. When we consider the GARCH specifications that we are likely to encounter in empirical research, we find that the Perron tests are reasonably robust to the presence of GARCH and do not suffer from severe over- or under-rejection of a correct null hypothesis.
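The flavour of such a Monte Carlo size experiment can be sketched as follows, using a standard ADF test as a stand-in for Perron's structural-break test: data are generated under a true unit root with GARCH(1,1) errors and the empirical rejection rate at the nominal 5% level is recorded. All parameter values are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)

def garch11_errors(n, omega=0.05, alpha=0.3, beta=0.65):
    """Simulate GARCH(1,1) innovations; alpha + beta near 1 => near-integrated variance."""
    eps, h = np.empty(n), np.empty(n)
    h[0] = omega / (1.0 - alpha - beta)           # unconditional variance
    eps[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
        eps[t] = np.sqrt(h[t]) * rng.standard_normal()
    return eps

n_obs, n_rep, rejections = 250, 200, 0
for _ in range(n_rep):
    y = np.cumsum(garch11_errors(n_obs))          # random walk: the null is true
    pvalue = adfuller(y, regression="c")[1]
    rejections += pvalue < 0.05
print(f"empirical size at nominal 5%: {rejections / n_rep:.3f}")
```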
Abstract:
Many key economic and financial series are bounded either by construction or through policy controls. Conventional unit root tests are potentially unreliable in the presence of bounds, since they tend to over-reject the null hypothesis of a unit root, even asymptotically. So far, very little work has been undertaken to develop unit root tests which can be applied to bounded time series. In this paper we address this gap in the literature by proposing unit root tests which are valid in the presence of bounds. We present new augmented Dickey–Fuller type tests as well as new versions of the modified ‘M’ tests developed by Ng and Perron [Ng, S., Perron, P., 2001. Lag length selection and the construction of unit root tests with good size and power. Econometrica 69, 1519–1554] and demonstrate how these tests, combined with a simulation-based method to retrieve the relevant critical values, make it possible to control size asymptotically. A Monte Carlo study suggests that the proposed tests perform well in finite samples. Moreover, the tests outperform the Phillips–Perron type tests originally proposed in Cavaliere [Cavaliere, G., 2005. Limited time series with a unit root. Econometric Theory 21, 907–945]. An illustrative application to U.S. interest rate data is provided.
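A simulation-based retrieval of critical values, in the spirit of (though much simpler than) the procedure described above, can be sketched as follows: bounded random walks are simulated under the unit root null, a Dickey–Fuller t-statistic is computed for each, and its empirical 5% quantile is taken as the critical value. The bounds, sample size and innovation variance are assumptions, and the simple clipped random walk stands in for the regulated processes analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def df_tstat(y):
    """t-statistic on rho in the regression  dy_t = const + rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def bounded_random_walk(n, lower, upper, sigma):
    """Random walk whose path is clipped at the bounds (a simple regulated walk)."""
    y = np.empty(n)
    y[0] = 0.0
    for t in range(1, n):
        y[t] = np.clip(y[t - 1] + sigma * rng.standard_normal(), lower, upper)
    return y

n, lower, upper, sigma = 300, -2.0, 2.0, 0.25
null_stats = np.array([df_tstat(bounded_random_walk(n, lower, upper, sigma))
                       for _ in range(2000)])
cv_5pct = np.quantile(null_stats, 0.05)
print(f"simulated 5% critical value for the bounded series: {cv_5pct:.2f}")
```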
Abstract:
Across five experiments, the temporal regularity and content of an irrelevant speech stream were varied and their effects on a serial recall task examined. Variations of the content, but not the rhythm, of the irrelevant speech stimuli reliably disrupted serial recall performance in all experiments. Bayesian analyses supported the null hypothesis over the hypothesis that irregular rhythms would disrupt memory to a greater extent than regular rhythms. Pooling the data in a combined analysis revealed that regular presentation of the irrelevant speech was significantly more disruptive to serial recall than irregular presentation. These results are consistent with the idea that auditory distraction is sensitive to both intra-item and inter-item relations and challenge an orienting-based account of auditory distraction.
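A Bayes factor of the kind reported above can be approximated from BICs; the sketch below compares a null model (no rhythm effect) with an alternative (free mean) for paired difference scores. This is a generic BIC-based approximation applied to synthetic data, not the analysis used in the paper, and the sample size and effect are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 40
# Hypothetical per-participant difference in recall errors: irregular - regular.
d = rng.normal(0.0, 1.0, n)                  # generated under the null, for illustration

sse_null = np.sum(d ** 2)                    # mean fixed at zero (1 free parameter: sigma)
sse_alt = np.sum((d - d.mean()) ** 2)        # free mean (2 free parameters)
bic_null = n * np.log(sse_null / n) + 1 * np.log(n)
bic_alt = n * np.log(sse_alt / n) + 2 * np.log(n)

bf01 = np.exp((bic_alt - bic_null) / 2)      # >1 favours the null hypothesis
print(f"BF01 (evidence for the null) ≈ {bf01:.2f}")
```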
Abstract:
This paper discusses the dangers inherent in attempting to simplify something as complex as development. It does this by exploring the Lynn and Vanhanen theory of deterministic development, which asserts that the varying levels of economic development seen between countries can be explained by differences in 'national intelligence' (national IQ). Assuming that intelligence is genetically determined, and given that different races have been shown to have different IQs, they argue that economic development (measured as GDP/capita) is largely a function of race and that interventions to address imbalances can only have a limited impact. The paper presents the Lynn and Vanhanen case and critically discusses the data and analyses (linear regression) upon which it is based. It also extends the cause-effect basis of Lynn and Vanhanen's theory for economic development into human development by using the Human Development Index (HDI). It is argued that while there is nothing mathematically incorrect in their calculations, there are concerns over the data they employ. Even more fundamentally, it is argued that statistically significant correlations between the various components of the HDI and national IQ can arise via a host of cause-effect pathways, and hence the genetic determinism theory is far from proven. The paper ends by discussing the dangers involved in the use of over-simplistic measures of development as a means of exploring cause-effect relationships. While the creators of development indices such as the HDI have good intentions, simplistic indices can encourage simplistic explanations of under-development.
Abstract:
Development geography has long sought to understand why inequalities exist and the best ways to address them. Dependency theory sets out an historical rationale for under-development based on colonialism and a legacy of a developed core and an under-developed periphery. Race is relevant in this theory only insofar as Europeans are white and the places they colonised were occupied by people with darker skin colour; there are no innate biological reasons why it happened in that order. However, a new theory for national inequalities proposed by Lynn and Vanhanen in a series of publications makes the case that poorer countries have that status because of poorer genetic stock rather than an accident of history. They argue that IQ has a genetic basis and that IQ is linked to ability; races with a lower IQ therefore have less ability, and national IQ can thus be positively correlated with performance as measured by an indicator such as GDP/capita. Their thesis is one of despair, as little can be done to improve genetic stock significantly other than a programme of eugenics. This paper summarises and critiques the Lynn and Vanhanen hypothesis and the assumptions upon which it is based, and uses this analysis to show how a human desire to simplify in order to manage can be dangerous in development geography. While attention may naturally be focused on the 'national IQ' variables as a proxy measure of 'innate ability', the assumption of GDP per capita as an indicator of 'success' and 'achievement' is far more readily accepted without criticism. The paper makes the case that the current vogue for indicators, indices and cause-effect can be tyrannical.
Abstract:
R. H. Whittaker's idea that plant diversity can be divided into a hierarchy of spatial components, from alpha at the within-habitat scale through beta for the turnover of species between habitats to gamma along regional gradients, implies the underlying existence of alpha, beta, and gamma niches. We explore the hypothesis that the evolution of alpha, beta, and gamma niches is also hierarchical, with traits that define the alpha niche being labile, while those defining beta and gamma niches are conservative. At the alpha level we find support for the hypothesis in the lack of a close significant phylogenetic relationship between meadow species that have similar alpha niches. In a second test, alpha niche overlap based on a variety of traits is compared between congeners and noncongeners in several communities; here, too, there is no evidence of a correlation between alpha niche and phylogeny. To test whether beta and gamma niches evolve conservatively, we reconstructed the evolution of relevant traits on evolutionary trees for 14 different clades. Tests against null models revealed a number of instances, including some in island radiations, in which habitat (beta niche) and elevational maximum (an aspect of the gamma niche) showed evolutionary conservatism.
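One simple form of such a null-model test is a Mantel-style permutation: the correlation between pairwise trait differences and pairwise phylogenetic distances is compared with the distribution obtained by shuffling species labels. The sketch below uses synthetic matrices as stand-ins and is not the authors' method; a large positive observed correlation relative to the null would indicate conservatism.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sp = 30

# Synthetic stand-ins for a cophenetic (phylogenetic) distance matrix and a trait.
phylo_dist = np.abs(rng.normal(0.0, 1.0, (n_sp, n_sp)))
phylo_dist = (phylo_dist + phylo_dist.T) / 2
np.fill_diagonal(phylo_dist, 0.0)
trait = rng.normal(0.0, 1.0, n_sp)                    # e.g. elevational maximum
trait_dist = np.abs(trait[:, None] - trait[None, :])

iu = np.triu_indices(n_sp, k=1)                       # upper-triangle pairs only

def matrix_corr(a, b):
    return np.corrcoef(a[iu], b[iu])[0, 1]

obs = matrix_corr(trait_dist, phylo_dist)
null = []
for _ in range(999):
    perm = rng.permutation(n_sp)                      # shuffle species labels
    null.append(matrix_corr(trait_dist[np.ix_(perm, perm)], phylo_dist))
p_value = (np.sum(np.array(null) >= obs) + 1) / (len(null) + 1)
print(f"observed correlation = {obs:.3f}, permutation p-value = {p_value:.3f}")
```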
Abstract:
Most current research into therapeutic approaches to muscle diseases involves the use of the mouse as an experimental model. Furthermore, a major strategy to alleviate myopathic symptoms through enhancing muscle growth and regeneration is to inhibit the action of myostatin (Mstn), a transforming growth factor-beta (TGF-beta) family member that inhibits muscle growth. Presently, however, no study has expanded the morphological analysis of mouse skeletal muscle beyond a few individual muscles of the distal hindlimb, on which broad conclusions have been based. Therefore, we have initially undertaken an expansive analysis of the skeletal musculature of the mouse forelimb and highlighted the species-specific differences between equivalent muscles of the rat, another prominently used experimental model. Subsequently, we examined the musculature of the forelimb in both young and old adult wild-type (mstn(+/+)) and myostatin-null (mstn(-/-)) mice and assessed the potential beneficial and detrimental effects of myostatin deletion on muscle morphology and composition during the aging process. We showed that: (1) the forelimb muscles of the mouse display a more glycolytic phenotype than those of the rat; (2) in the absence of myostatin, the induced myofiber hyperplasia, hypertrophy, and glycolytic conversion all occur in a muscle-specific manner; and, importantly, (3) the loss of myostatin significantly alters the dynamics of postnatal muscle growth and impairs age-related oxidative myofiber conversion.
Abstract:
Many families of interspersed repetitive DNA elements, including human Alu and LINE (Long Interspersed Element) elements, have been proposed to have accumulated through repeated copying from a single source locus: the "master gene." The extent to which a master gene model is applicable has implications for the origin, evolution, and function of such sequences. One repetitive element family for which a convincing case for a master gene has been made is the rodent ID (identifier) elements. Here we devise a new test of the master gene model and use it to show that mouse ID element sequences are not compatible with a strict master gene model. We suggest that a single master gene is rarely, if ever, likely to be responsible for the accumulation of any repeat family.