911 results for Null Hypothesis
Abstract:
In the light of Project MATCH, is it reasonable to accept the null hypothesis that there are no clinically significant matching effects between patient characteristics and cognitive-behaviour therapy (CBT), motivational enhancement therapy (MET) and Twelve-Step facilitation therapy (TSF)? The Project MATCH investigators considered the null hypothesis but preferred the alternative hypothesis that further analysis may reveal combinations of patient and therapist characteristics that show more substantial matching effects than any of the variables that they have examined to date [1].
Abstract:
This paper develops a new test of true versus spurious long memory, based on log-periodogram estimation of the long memory parameter using skip-sampled data. A correction factor is derived to overcome the bias in this estimator due to aliasing. The procedure is designed to be used in the context of a conventional test of significance of the long memory parameter, and a composite test procedure is described that has the properties of known asymptotic size and consistency. The test is implemented using the bootstrap, with the distribution under the null hypothesis approximated by a dependent-sample bootstrap technique that captures short-run dependence following fractional differencing. The properties of the test are investigated in a set of Monte Carlo experiments. The procedure is illustrated by applications to exchange rate volatility and dividend growth series.
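As a point of reference for the kind of estimator involved, the sketch below implements a plain log-periodogram (GPH-style) regression estimate of the long memory parameter d on the full series; the paper's skip-sampled estimator, aliasing correction and bootstrap null distribution are not reproduced, and the bandwidth choice is purely illustrative.

```python
import numpy as np

def gph_estimate(x, bandwidth_power=0.5):
    """Log-periodogram (GPH-style) estimate of the long memory parameter d.

    A plain estimator on the full series; the paper's skip-sampled,
    aliasing-corrected version is not reproduced here.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** bandwidth_power)            # number of low frequencies used (illustrative choice)
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n                # Fourier frequencies
    # Periodogram ordinates at the first m Fourier frequencies
    dft = np.fft.fft(x - x.mean())
    I = (np.abs(dft[1:m + 1]) ** 2) / (2.0 * np.pi * n)
    # Regress log I(lam_j) on -log(4 sin^2(lam_j / 2)); the slope estimates d
    regressor = -np.log(4.0 * np.sin(lam / 2.0) ** 2)
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]

# Illustration: white noise should give d close to 0 (the null of no long memory)
rng = np.random.default_rng(0)
print(gph_estimate(rng.standard_normal(2048)))
```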
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
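A minimal sketch of the two ideas the abstract contrasts, using simulated data: the p value measures the strength of evidence against the null on the observed sample, while the Neyman-Pearson decision fixes a Type I error level alpha in advance; all numbers below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated measurements with a true effect of zero (the null is actually true here)
group_a = rng.normal(loc=0.0, scale=1.0, size=30)
group_b = rng.normal(loc=0.0, scale=1.0, size=30)

# Fisher-style p value: probability of a difference at least this extreme if the null holds
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Neyman-Pearson-style decision: fix the Type I error level alpha in advance
alpha = 0.05
reject = p_value < alpha

print(f"t = {t_stat:.3f}, p = {p_value:.3f}, reject H0 at alpha={alpha}: {reject}")
# Note: p is NOT the probability that the null hypothesis is true.
```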
Abstract:
Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular frequentist and Bayesian, have promoted radically different solutions for taking a decision about the plausibility of competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel extensive debates in the literature. More recently, controversial discussion was initiated by an editorial decision of a scientific journal [1] to refuse any paper submitted for publication containing null hypothesis testing procedures. Since the large majority of papers published in forensic journals propose the evaluation of statistical evidence based on the so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices.
Abstract:
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
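A hedged sketch of two of the testing strategies listed above, assuming a forecast expressed as expected counts per space-time-magnitude bin: a Poisson "number test" against the forecast total, and a log likelihood ratio against a null-hypothesis rate model. The rates and counts are invented for illustration.

```python
import numpy as np
from scipy import stats

# Illustrative expected counts per bin; not from the paper.
forecast_rates = np.array([0.2, 0.5, 1.1, 0.3])   # hypothesis H1 (the forecast)
null_rates     = np.array([0.4, 0.4, 0.4, 0.4])   # "normal behavior" null hypothesis H0
observed       = np.array([0,   1,   2,   0  ])   # actual earthquakes per bin

# (i) Number test: is the total observed count consistent with the forecast total?
total_forecast = forecast_rates.sum()
p_low  = stats.poisson.cdf(observed.sum(), total_forecast)      # P(N <= observed total)
p_high = stats.poisson.sf(observed.sum() - 1, total_forecast)   # P(N >= observed total)

# (iii) Likelihood ratio: joint Poisson log likelihood under H1 versus H0
def log_lik(rates, counts):
    return stats.poisson.logpmf(counts, rates).sum()

lr = log_lik(forecast_rates, observed) - log_lik(null_rates, observed)
print(f"N-test tail probabilities: {p_low:.3f}, {p_high:.3f}; log likelihood ratio = {lr:.3f}")
```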
Abstract:
The Paraná-Paraguay basin encompasses central western Brazil, northeastern Paraguay, eastern Bolivia and northern Argentina. The Pantanal is a flooded plain with marked dry and rainy seasons that, due to its soil characteristics and low declivity, has a great water holding capacity supporting abundant fish fauna. Piaractus mesopotamicus, or pacu, endemic to the Paraná-Paraguay basin, is a migratory species economically important in fisheries and ecologically important as a potential seed disperser. In this paper we employ eight microsatellite loci to assess the population structure of 120 pacu sampled inside and outside the Pantanal of Mato Grosso. Our main objective was to test the null hypothesis of panmixia and to verify whether there was a different structuring pattern between the Pantanal, where there are no physical barriers to fish movement, and the heavily impounded Paraná and Paranapanema rivers. All loci had moderate to high levels of polymorphism; the number of alleles varied from three to 18. The average observed heterozygosity varied from 0.068 to 0.911. After the Bonferroni correction three loci remained significant for deviations from Hardy-Weinberg equilibrium, and for those the frequency of null alleles was estimated. FST and RST pairwise comparisons detected low divergence among sampling sites, and differentiation was significant only between Paranapanema and Cuiabá and between Paranapanema and Taquari. No correlation between genetic distance and the natural logarithm of the geographic distance was detected. Results indicate that, for conservation purposes and for restoration programs, the small genetic differences detected in the Cuiabá and Paranapanema rivers should be taken into consideration.
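For reference, the Bonferroni step mentioned above amounts to dividing the significance level by the number of per-locus tests; the sketch below assumes per-locus Hardy-Weinberg p-values are already in hand (the values shown are made up).

```python
# Minimal sketch of a Bonferroni correction across loci, assuming per-locus
# Hardy-Weinberg test p-values are already available (values below are made up).
p_values = [0.001, 0.020, 0.004, 0.300, 0.048, 0.700, 0.002, 0.150]
alpha = 0.05
adjusted_alpha = alpha / len(p_values)   # Bonferroni: divide alpha by the number of tests

significant = [i for i, p in enumerate(p_values) if p < adjusted_alpha]
print(f"adjusted alpha = {adjusted_alpha:.4f}; loci still significant: {significant}")
```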
Abstract:
Background: There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error in mathematical regulatory networks and shows how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error severely affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false-positive rates identified in actual regulatory network models.
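The attenuation problem the abstract describes can be seen in the textbook errors-in-variables setting; the sketch below shows a naive OLS slope on a noise-contaminated regressor and the classical reliability-ratio correction when the measurement-error variance is known. It is not the authors' estimator, just the simplest corrected-OLS analogue.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a regression where the regressor is observed with additive noise.
n = 5000
true_beta = 2.0
sigma_u = 0.8                                  # measurement-error std dev (assumed known)
x_true = rng.normal(size=n)
y = true_beta * x_true + rng.normal(scale=0.5, size=n)
x_obs = x_true + rng.normal(scale=sigma_u, size=n)

# Naive OLS on the noisy regressor is attenuated (biased towards zero).
beta_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Textbook errors-in-variables correction for simple regression:
# divide by the reliability ratio (Var(x_obs) - sigma_u^2) / Var(x_obs).
reliability = (np.var(x_obs, ddof=1) - sigma_u**2) / np.var(x_obs, ddof=1)
beta_corrected = beta_naive / reliability

print(f"naive OLS: {beta_naive:.3f}, corrected: {beta_corrected:.3f}, true: {true_beta}")
```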
Abstract:
Chagas disease is still a major public health problem in Latin America. Its causative agent, Trypanosoma cruzi, can be typed into three major groups, T. cruzi I, T. cruzi II and hybrids. These groups each have specific genetic characteristics and epidemiological distributions. Several highly virulent strains are found in the hybrid group; their origin is still a matter of debate. The null hypothesis is that the hybrids are of polyphyletic origin, evolving independently from various hybridization events. The alternative hypothesis is that all extant hybrid strains originated from a single hybridization event. We sequenced both alleles of genes encoding EF-1 alpha, actin and SSU rDNA of 26 T. cruzi strains and DHFR-TS and TR of 12 strains. This information was used for network genealogy analysis and Bayesian phylogenies. We found T. cruzi I and T. cruzi II to be monophyletic and that all hybrids had different combinations of T. cruzi I and T. cruzi II haplotypes plus hybrid-specific haplotypes. Bootstrap values (networks) and posterior probabilities (Bayesian phylogenies) of clades supporting the monophyly of hybrids were far below the 95% confidence interval, indicating that the hybrid group is polyphyletic. We hypothesize that T. cruzi I and T. cruzi II are two different species and that the hybrids are extant representatives of independent events of genome hybridization, which sporadically have sufficient fitness to impact on the epidemiology of Chagas disease.
Abstract:
This study evaluates the impacts of Brazilian highway conditions on fuel consumption and, consequently, on carbon dioxide (CO2) emissions. For the purpose of this study, highway conditions refer to the level of highway maintenance: the incidence of large potholes, large surface cracks, uneven sections, and debris. Primary computer-collected data related to the fuel consumption of three types of trucks were analyzed. The data were derived from 88 trips taken over six routes, each route representative of one of two highway conditions: better or worse. Study results are initially presented for each type of truck being monitored. The results are then aggregated to approximate the entire Brazilian highway network. In all cases, results confirmed environmental benefits resulting from travel over the better routes. Traveling the better roads was found to increase energy efficiency, which resulted in lower fuel consumption and lower CO2 emissions. Statistical analysis of the results suggests that, in general, fuel consumption data were significant at P < 0.05, rejecting the null hypothesis that average fuel consumption from traveling the better routes is statistically equal to average fuel consumption from traveling the worse routes. Improved Brazilian road conditions would generate economic benefits, reduce dependency on and consumption of fossil fuels (due to the increase in energy efficiency), and reduce CO2 emissions. These findings may have additional relevancy if Brazil needs to reduce carbon dioxide emissions to reach future Kyoto Protocol's emissions targets, which should take effect in January 2013.
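The hypothesis being rejected is a simple equality of means; a minimal sketch with simulated (not the study's) fuel-consumption figures looks like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative fuel-consumption figures (litres/100 km); values are made up,
# not the study's data, just to show the hypothesis being tested.
better_routes = rng.normal(loc=34.0, scale=2.0, size=40)
worse_routes  = rng.normal(loc=36.5, scale=2.5, size=48)

# H0: mean fuel consumption is equal on better and worse routes.
t_stat, p_value = stats.ttest_ind(better_routes, worse_routes, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}; reject H0 at 5%: {p_value < 0.05}")
```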
Abstract:
We develop a test of evolutionary change that incorporates a null hypothesis of homogeneity, which encompasses time invariance in the variance and autocovariance structure of residuals from estimated econometric relationships. The test framework is based on examining whether shifts in spectral decomposition between two frames of data are significant. Rejection of the null hypothesis will point not only to weak nonstationarity but to shifts in the structure of the second-order moments of the limiting distribution of the random process. This would indicate that the second-order properties of any underlying attractor set have changed in a statistically significant way, pointing to the presence of evolutionary change. A demonstration of the test's applicability to a real-world macroeconomic problem is accomplished by applying the test to the Australian Building Society Deposits (ABSD) model.
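As a rough illustration only (not the authors' test statistic or its null distribution), one can estimate the spectral density of the residuals in two data frames and summarize the shift between them; assessing significance would additionally require a reference distribution, for example obtained via the bootstrap.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

# Hedged illustration: compare the estimated spectral density of residuals in two
# data frames. Under homogeneity the spectra should agree up to sampling noise.
residuals = rng.standard_normal(1024)           # placeholder residual series
frame_1, frame_2 = residuals[:512], residuals[512:]

f1, spec_1 = signal.welch(frame_1, nperseg=128)
f2, spec_2 = signal.welch(frame_2, nperseg=128)

# A simple summary of the shift in second-order structure between frames
# (frequency zero excluded); under homogeneity it should fluctuate around zero.
shift = np.mean(np.log(spec_2[1:] / spec_1[1:]))
print(f"mean log spectral ratio between frames: {shift:.3f}")
```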
Abstract:
This paper considers a stochastic frontier production function which has an additive, heteroscedastic error structure. The model allows for negative or positive marginal production risks of inputs, as originally proposed by Just and Pope (1978). The technical efficiencies of individual firms in the sample are a function of the levels of the input variables in the stochastic frontier, in addition to the technical inefficiency effects. These are two features of the model which are not exhibited by the commonly used stochastic frontiers with multiplicative error structures. An empirical application is presented using cross-sectional data on Ethiopian peasant farmers. The null hypothesis of no technical inefficiencies of production among these farmers is accepted. Further, the flexible risk models do not fit the data on peasant farmers as well as the traditional stochastic frontier model with multiplicative error structure.
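Acceptance of the null of no technical inefficiency is typically based on a likelihood-ratio comparison of the frontier model with a restricted model that has no inefficiency effects; the sketch below uses placeholder log-likelihood values (not estimates from the paper) to show the mechanics.

```python
from scipy import stats

# Hedged sketch of a likelihood-ratio test of the null of no technical inefficiency,
# assuming the maximized log-likelihoods of the restricted model (no inefficiency
# term) and the full stochastic frontier were obtained elsewhere.
# The two numbers below are placeholders, not estimates from the paper.
llf_restricted = -152.4   # average response function without inefficiency effects
llf_frontier   = -151.9   # full frontier model

lr = 2.0 * (llf_frontier - llf_restricted)

# Because the inefficiency variance lies on the boundary of the parameter space
# under H0, the plain chi-square reference used here is conservative; a mixed
# chi-square critical value is often used in practice.
critical = stats.chi2.ppf(0.95, df=1)
print(f"LR = {lr:.2f}, chi2(1) 5% critical value = {critical:.2f}, reject H0: {lr > critical}")
```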
Abstract:
The long-term effectiveness of chlorhexidine as a matrix metalloproteinase (MMP) inhibitor may be compromised when water is incompletely removed during dentin bonding. This study challenged this anti-bond-degradation strategy by testing the null hypothesis that wet-bonding with water or ethanol has no effect on the effectiveness of chlorhexidine in preventing hybrid layer degradation over an 18-month period. Acid-etched dentin was bonded under pulpal pressure simulation with Scotchbond MP and Single Bond 2 using water wet-bonding, or with a hydrophobic adhesive using ethanol wet-bonding, in each case with or without pre-treatment with chlorhexidine diacetate (CHD). Resin-dentin beams were prepared for bond strength and TEM evaluation after 24 hrs and after aging in artificial saliva for 9 and 18 mos. Bonds made to ethanol-saturated dentin did not change over time, with preservation of hybrid layer integrity. Bonds made to CHD pre-treated, acid-etched dentin with the commercial adhesives using water wet-bonding were preserved after 9 mos but not after 18 mos, when severe hybrid layer degradation was observed. The results led to rejection of the null hypothesis and highlight the concept of biomimetic water replacement from the collagen intrafibrillar compartments as the ultimate goal in extending the longevity of resin-dentin bonds.
Abstract:
Objectives: This study tested the following null hypotheses: (1) there is no difference in resin-dentine bond strength when an experimental glutaraldehyde primer solution is added prior to bonding procedures and (2) there is no difference in resin-dentine bond strength when the experimental glutaraldehyde/adhesive system is applied under dry or wet demineralized dentine conditions. Methods: Extracted human maxillary third molars were selected. Flat, mid-coronal dentine was exposed for bonding and four groups were formed. Two groups were designated for the dry and two for the wet dentine technique: DRY: (1) Group GD: acid etching + glutaraldehyde primer (primer A) + HEMA/ethanol primer (primer B) applied to dried dentine + unfilled resin; (2) Group D: the same as GD, except that primer A was not applied; WET: (3) Group GW: the same as GD, but primer B was applied under wet dentine conditions; (4) Group W: the same as GW, except that primer A was not applied. The bonding resin was light-cured and a resin core was built up on the adhesive layer. Teeth were then prepared for microtensile bond testing to evaluate bond strength. The data obtained were submitted to ANOVA and Tukey's test (alpha = 0.05). Results: Glutaraldehyde primer application significantly improved resin-dentine bond strength. No significant difference was observed when the same experimental adhesive system was applied under either dry or wet dentine conditions. These results allow the first null hypothesis to be rejected and the second to be accepted. Conclusion: Glutaraldehyde may affect demineralized dentine properties, leading to improved resin bonding to wet and dry substrates.
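The ANOVA-plus-Tukey step reported above can be sketched as follows with simulated bond-strength values (the group means and sample sizes are assumed, not taken from the study):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)

# Illustrative microtensile bond-strength values (MPa) for the four groups;
# the numbers are simulated, not the study's measurements.
assumed_means = {"GD": 38, "D": 30, "GW": 39, "W": 31}
data, labels = [], []
for name, mean in assumed_means.items():
    values = rng.normal(loc=mean, scale=5.0, size=10)
    data.append(values)
    labels.extend([name] * len(values))

# One-way ANOVA across the four bonding groups, followed by Tukey's HSD (alpha = 0.05)
f_stat, p_value = stats.f_oneway(*data)
tukey = pairwise_tukeyhsd(np.concatenate(data), labels, alpha=0.05)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
print(tukey)
```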
Abstract:
Purpose: To evaluate the effect of oxalate during total-etch bonding, under different dentin moisture conditions, over time. The null hypothesis tested was that microtensile bond strength (μTBS) was not affected by oxalate treatment and dentin moisture during two evaluation periods. Methods: Extracted human third molars had their mid-coronal dentin exposed flat and polished with 600-grit SiC paper. The surfaces were etched with 35% phosphoric acid for 15 seconds, washed and blot dried. After etching, a 3% potassium oxalate gel was applied for 120 seconds, except for the control group (no desensitizer). The surface was then washed and left moist (wet bonding) or air-dried for 30 seconds (dry bonding). The surfaces were bonded with: (1) two 2-step etch-and-rinse adhesives, Single Bond (SB) and Prime & Bond NT (PBNT), and (2) one 3-step etch-and-rinse adhesive, Scotchbond Multi Purpose (SBMP). Composite buildups were constructed incrementally with Tetric Ceram resin composite. Each increment was cured for 40 seconds. After storage in water for 24 hours or 1 year at 37°C, the specimens were prepared for μTBS testing with a cross-sectional area of approximately 1 mm². They were then tested in tension in an Instron machine at 0.5 mm/minute. Data were analyzed by ANOVA and Student-Newman-Keuls tests at alpha = 0.05. Results: Application of potassium oxalate had no significant effect on the bond strengths of SBMP and PBNT, regardless of the surface moisture condition (P > 0.05). Conversely, reduced bond strengths were observed after oxalate treatment for SB under both moisture conditions, being significantly lower when using a dry-bonding procedure (P < 0.05). Lower bond strength was obtained for PBNT when a dry-bonding technique was used, regardless of the oxalate treatment (P < 0.05). After aging the specimens for 1 year, bond strengths decreased. Smaller reductions were observed for SBMP, regardless of moisture conditions. For the wet-bonding technique, smaller reductions after 1 year were observed without oxalate treatment for SB and after oxalate treatment for PBNT. (Am J Dent 2010;23:137-141)
Abstract:
Objectives. To test the null hypothesis that continuity of resin cement/dentin interfaces is not affected by location along the root canal walls or by water storage for 3 months when bonding fiber posts into root canals. Methods. Fiber posts were luted to bovine incisors using four resinous luting systems: Multilink, Variolink II, Enforce Dual and Enforce PV. After cementation, roots were longitudinally sectioned and epoxy resin replicas were prepared for SEM analysis (baseline). The original halves were immersed in solvent, replicated and evaluated. After 3 months of water storage and a second solvent immersion, a new set of replicas was made and analyzed. The ratio (%) between the length (mm) of available bonding interface and the actual extension of bonded cement/dentin interface was calculated. Results. Significantly lower percent values of bond integrity were found for Multilink (8.25%) and Variolink II (10.08%) when compared to Enforce Dual (25.11%) and Enforce PV (27.0%) at the baseline analysis. The same trend was observed after immersion in solvent, with no significant changes. However, bond integrity was significantly reduced after 3 months of water storage and a second solvent immersion, to values below 5% (Multilink = 3.31%, Variolink II = 1.87%, Enforce Dual = 1.20%, and Enforce PV = 0.75%). The majority of gaps were detected at the apical and middle thirds at baseline and after immersion in solvent. After 3 months, gaps were also detected at the cervical third. Significance. Bond integrity at the cement/dentin interface was surprisingly low after cementation of fiber posts to root canals with all resin cements. It was not significantly altered after immersion in solvent, but was further compromised after 3 months of water storage. Gaps were mainly seen at the middle and apical thirds throughout the experiment and extended to the cervical third after water storage for 3 months. Bond integrity of fiber posts luted to root canals was affected both by location and by water storage.