911 results for Null Hypothesis


Relevance:

60.00%

Publisher:

Abstract:

It is of interest in some applications to determine whether there is a relationship between a hazard rate function (or a cumulative incidence function) and a mark variable which is only observed at uncensored failure times. We develop nonparametric tests for this problem when the mark variable is continuous. Tests are developed for the null hypothesis that the mark-specific hazard rate is independent of the mark versus ordered and two-sided alternatives expressed in terms of mark-specific hazard functions and mark-specific cumulative incidence functions. The test statistics are based on functionals of a bivariate test process equal to a weighted average of differences between a Nelson-Aalen-type estimator of the mark-specific cumulative hazard function and a nonparametric estimator of this function under the null hypothesis. The weight function in the test process can be chosen so that the test statistics are asymptotically distribution-free. Asymptotically correct critical values are obtained through a simple simulation procedure. The testing procedures are shown to perform well in numerical studies, and are illustrated with an AIDS clinical trial example. Specifically, the tests are used to assess if the instantaneous or absolute risk of treatment failure depends on the amount of accumulation of drug resistance mutations in a subject's HIV virus. This assessment helps guide development of anti-HIV therapies that surmount the problem of drug resistance.


We introduce a diagnostic test for the mixing distribution in a generalised linear mixed model. The test is based on the difference between the marginal maximum likelihood and conditional maximum likelihood estimates of a subset of the fixed effects in the model. We derive the asymptotic variance of this difference, and propose a test statistic that has a limiting chi-square distribution under the null hypothesis that the mixing distribution is correctly specified. For the important special case of the logistic regression model with random intercepts, we evaluate via simulation the power of the test in finite samples under several alternative distributional forms for the mixing distribution. We illustrate the method by applying it to data from a clinical trial investigating the effects of hormonal contraceptives in women.
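The statistic described above has the generic shape of a Hausman-type comparison of two estimators. The following is an illustrative sketch only, in notation that is not the paper's own; the paper's exact variance expression may differ:

```latex
T \;=\; \left(\hat{\beta}_{\mathrm{MML}} - \hat{\beta}_{\mathrm{CML}}\right)^{\!\top}
\widehat{V}^{-1}
\left(\hat{\beta}_{\mathrm{MML}} - \hat{\beta}_{\mathrm{CML}}\right)
\;\xrightarrow{\;d\;}\; \chi^2_{q}
\quad\text{under } H_0,
```

where $\hat{\beta}_{\mathrm{MML}}$ and $\hat{\beta}_{\mathrm{CML}}$ are the marginal and conditional maximum likelihood estimates of the $q$ fixed effects being compared, and $\widehat{V}$ estimates the asymptotic variance of their difference.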


Equivalence testing is growing in use in scientific research outside of its traditional role in the drug approval process. Largely owing to its ease of use and its recommendation in United States Food and Drug Administration guidance, the most common statistical method for testing (bio)equivalence is the two one-sided tests procedure (TOST). Like classical point-null hypothesis testing, TOST is subject to multiplicity concerns as more comparisons are made. In this manuscript, a condition that bounds the family-wise error rate (FWER) under TOST is given. This condition then leads to a simple solution for controlling the FWER. Specifically, we demonstrate that if all pairwise comparisons of k independent groups are being evaluated for equivalence, then simply scaling the nominal Type I error rate down by (k - 1) is sufficient to maintain the family-wise error rate at the desired value or less. The resulting rule is much less conservative than the equally simple Bonferroni correction. An example of equivalence testing in a non-drug-development setting is given.
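As an illustration of the adjustment, the sketch below runs every pairwise TOST comparison of k groups at the scaled level α/(k − 1). Function names are hypothetical, and a large-sample z approximation stands in for the exact t-based TOST:

```python
import itertools
import math
from statistics import NormalDist, mean, stdev

def tost_z(x, y, delta, alpha):
    """Two one-sided tests (TOST) for equivalence of two means, using a
    large-sample z approximation. Equivalence is declared when both
    one-sided nulls (difference <= -delta, difference >= +delta) are
    rejected at level alpha."""
    d = mean(x) - mean(y)
    se = math.sqrt(stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y))
    p_lower = 1 - NormalDist().cdf((d + delta) / se)  # H0: diff <= -delta
    p_upper = NormalDist().cdf((d - delta) / se)      # H0: diff >= +delta
    return max(p_lower, p_upper) < alpha

def all_pairwise_equivalence(groups, delta, alpha=0.05):
    """Run all pairwise equivalence comparisons of k groups at the
    adjusted level alpha / (k - 1), the scaling the paper shows is
    sufficient to bound the family-wise error rate at alpha."""
    k = len(groups)
    alpha_adj = alpha / (k - 1)
    return {(i, j): tost_z(groups[i], groups[j], delta, alpha_adj)
            for i, j in itertools.combinations(range(k), 2)}
```

For k = 3 groups each TOST runs at α/2 = 0.025, which is less conservative than the Bonferroni level α/3 over the three pairwise comparisons.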


We previously showed that lifetime cumulative lead dose, measured as lead concentration in the tibia bone by X-ray fluorescence, was associated with persistent and progressive declines in cognitive function and with decreases in MRI-based brain volumes in former lead workers. Moreover, larger region-specific brain volumes were associated with better cognitive function. These findings motivated us to explore a novel application of path analysis to evaluate effect mediation. Voxel-wise path analysis, at face value, represents the natural evolution of voxel-based morphometry methods to answer questions of mediation. Application of these methods to the former lead worker data demonstrated potential limitations in this approach, where there was a tendency for results to be strongly biased towards the null hypothesis (lack of mediation). Moreover, a complementary analysis using anatomically derived region-of-interest (ROI) volumes yielded opposing results, suggesting evidence of mediation. Specifically, in the ROI-based approach, there was evidence that the association of tibia lead with function in three cognitive domains was mediated through the volumes of total brain, frontal gray matter, and/or possibly cingulate. A simulation study was conducted to investigate whether the voxel-wise results arose from an absence of localized mediation, or from more subtle defects in the methodology. The simulation results showed the same null bias as that seen in the lead worker data. Both the lead worker data results and the simulation study suggest that a null bias in voxel-wise path analysis limits its inferential utility for producing confirmatory results.


The penetration, translocation, and distribution of ultrafine and nanoparticles in tissues and cells are challenging issues in aerosol research. This article describes a set of novel quantitative microscopic methods for evaluating particle distributions within sectional images of tissues and cells by addressing the following questions: (1) is the observed distribution of particles between spatial compartments random? (2) Which compartments are preferentially targeted by particles? and (3) Does the observed particle distribution shift between different experimental groups? Each of these questions can be addressed by testing an appropriate null hypothesis. The methods all require observed particle distributions to be estimated by counting the number of particles associated with each defined compartment. For studying preferential labeling of compartments, the size of each of the compartments must also be estimated by counting the number of points of a randomly superimposed test grid that hit the different compartments. The latter provides information about the particle distribution that would be expected if the particles were randomly distributed, that is, the expected number of particles. From these data, we can calculate a relative deposition index (RDI) by dividing the observed number of particles by the expected number of particles. The RDI indicates whether the observed number of particles corresponds to that predicted solely by compartment size (for which RDI = 1). Within one group, the observed and expected particle distributions are compared by chi-squared analysis. The total chi-squared value indicates whether an observed distribution is random. If not, the partial chi-squared values help to identify those compartments that are preferential targets of the particles (RDI > 1). Particle distributions between different groups can be compared in a similar way by contingency table analysis. 
We first describe the preconditions and the way to implement these methods, then provide three worked examples, and finally discuss the advantages, pitfalls, and limitations of this method.
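A minimal sketch of the RDI and chi-squared computation described above, with hypothetical function and compartment names; expected counts follow from compartment sizes estimated by test-grid point counts:

```python
def particle_distribution_test(observed, grid_hits):
    """Relative deposition index (RDI) and chi-squared statistic for
    particle counts across compartments. `observed` maps compartment
    name -> particle count; `grid_hits` maps compartment name -> number
    of randomly superimposed test-grid points hitting it (a
    stereological estimate of compartment size)."""
    n_particles = sum(observed.values())
    n_hits = sum(grid_hits.values())
    per_compartment = {}
    chi2 = 0.0
    for comp, obs in observed.items():
        expected = n_particles * grid_hits[comp] / n_hits
        rdi = obs / expected                     # RDI = observed / expected
        partial = (obs - expected) ** 2 / expected
        chi2 += partial
        per_compartment[comp] = (rdi, partial)
    # compare chi2 to a chi-squared critical value with
    # (number of compartments - 1) degrees of freedom
    return per_compartment, chi2
```

For example, 30 particles observed in a compartment covering half the test-grid points out of 40 total particles gives an expected count of 20, an RDI of 1.5 (preferential targeting), and a large partial chi-squared contribution.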


Whether the subgingival microbiota differ between individuals with chronic and those with aggressive periodontitis, and whether smoking influences bacterial composition, is controversial. We hypothesized that the subgingival microbiota do not differ between sites in individuals with chronic or aggressive periodontitis, or by smoking status. Bacterial counts and proportional distributions were assessed in 84 individuals with chronic periodontitis and 22 with aggressive periodontitis. No differences in probing pocket depth by periodontal status were found (mean, 0.11 mm; 95% CI, 0.6 to 0.8; p = 0.74). Seven of 40 species, including Staphylococcus aureus, Parvimonas micra, and Prevotella intermedia, were found at higher levels in those with aggressive periodontitis (p < 0.001). Smokers had higher counts of Tannerella forsythia (p < 0.01). The prevalence of S. aureus in non-smokers with aggressive periodontitis was 60.5%. The null hypothesis was rejected, in that P. intermedia, S. aureus, and S. mutans were robust in diagnosing sites in individuals with aggressive periodontitis. S. aureus, S. sanguinis, and T. forsythia differentiated smoking status.


OBJECTIVE: To test the null hypothesis that there is no difference between premolar position visualized on panoramic radiographs (PRs) and lateral headfilms (LHs). MATERIALS AND METHODS: The prevalence of differences in the direction of crown angulation between PR and LH was assessed. Furthermore, brass wire markers with different sagittal and transverse angulations were placed in a dry skull. With the markers in place, LHs and PRs were taken. RESULTS: A difference in the direction of crown angulation of unerupted second premolars between PR and LH occurred in 19.5% of patients. The reason for the angulation differences is a buccolingual orientation of the tooth, which appears as a mesiodistal angulation on the PR. CONCLUSION: The null hypothesis was rejected since in one-fifth of the patients premolar projection differs between the panoramic radiograph and the lateral headfilm.


The longboard skateboard has a longer, and usually wider, deck than the standard skateboard, providing greater support for the rider at the higher speeds attained on this version of the skateboard. Fourteen volunteer subjects participated in downhill and uphill longboarding trials. Heart rates were monitored during both trials, and the downhill and uphill average heart rates were compared with resting heart rates and with accepted intensity recommendations for health and fitness benefits. The study questions were: Does longboarding have an acute effect on heart rate? If so, will longboarding uphill and/or downhill raise heart rate to levels recommended for improving cardiorespiratory health and fitness? With these questions as guidance we developed four hypotheses. With an average downhill heart rate of 131.4 beats/minute and an average uphill heart rate of 167.8 beats/minute, statistical analysis showed statistically significant p values < .0001, and each null hypothesis was rejected in favor of its respective research hypothesis. Based on average age and average resting heart rate, the average age-predicted maximum heart rate was 193.2 beats/minute and the heart rate reserve was 133.2 beats/minute. The average percentages of heart rate reserve for the downhill section (131.4 beats/minute) and uphill section (167.8 beats/minute) were 54% and 81%, respectively. Downhill heart rates fell within moderate intensity levels (40% to 60% of heart rate reserve), and uphill heart rates fell within vigorous intensity levels (greater than 60% of heart rate reserve). These results indicate that longboarding can increase heart rate to levels suggested by the American College of Sports Medicine for improving cardiovascular health and fitness.
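The intensity percentages above follow from the Karvonen (heart rate reserve) method. The sketch below reproduces them assuming maximum heart rate is estimated as 220 − age; the reported averages (193.2 maximum, 133.2 reserve) then imply an average age near 26.8 years and an average resting heart rate near 60 beats/minute, which are inferred values, not figures stated in the abstract:

```python
def percent_heart_rate_reserve(hr, resting_hr, age):
    """Karvonen intensity: exercise heart rate expressed as a percentage
    of heart rate reserve (HRR), with maximum heart rate estimated by
    the common 220 - age rule."""
    max_hr = 220 - age                 # age-predicted maximum heart rate
    hrr = max_hr - resting_hr          # heart rate reserve
    return 100 * (hr - resting_hr) / hrr
```

With the inferred averages, the downhill mean of 131.4 beats/minute works out to about 54% of heart rate reserve and the uphill mean of 167.8 beats/minute to about 81%, matching the abstract.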


OBJECTIVE To compare the precision of fit of full-arch implant-supported screw-retained computer-aided designed and computer-aided manufactured (CAD/CAM) titanium-fixed dental prostheses (FDP) before and after veneering. The null-hypothesis was that there is no difference in vertical microgap values between pure titanium frameworks and FDPs after porcelain firing. MATERIALS AND METHODS Five CAD/CAM titanium grade IV frameworks for a screw-retained 10-unit implant-supported reconstruction on six implants (FDI tooth positions 15, 13, 11, 21, 23, 25) were fabricated after digitizing the implant platforms and the cuspid-supporting framework resin pattern with a laser scanner (CARES(®) Scan CS2; Institut Straumann AG, Basel, Switzerland). A bonder, an opaquer, three layers of porcelain, and one layer of glaze were applied (Vita Titankeramik) and fired according to the manufacturer's preheating and fire cycle instructions at 400-800°C. The one-screw test (implant 25 screw-retained) was applied before and after veneering of the FDPs to assess the vertical microgap between implant and framework platform with a scanning electron microscope. The mean microgap was calculated from interproximal and buccal values. Statistical comparison was performed with non-parametric tests. RESULTS All vertical microgaps were clinically acceptable with values <90 μm. No statistically significant pairwise difference (P = 0.98) was observed between the relative effects of vertical microgap of unveneered (median 19 μm; 95% CI 13-35 μm) and veneered FDPs (20 μm; 13-31 μm), providing support for the null-hypothesis. Analysis within the groups showed significantly different values between the five implants of the FDPs before (P = 0.044) and after veneering (P = 0.020), while a monotonous trend of increasing values from implant 23 (closest position to screw-retained implant 25) to 15 (most distant implant) could not be observed (P = 0.169, P = 0.270). 
CONCLUSIONS Full-arch CAD/CAM titanium screw-retained frameworks have a high accuracy. Porcelain firing procedure had no impact on the precision of fit of the final FDPs. All implant microgap measurements of each FDP showed clinically acceptable vertical misfit values before and after veneering. Thus, the results do not only show accurate performance of the milling and firing but show also a reproducible scanning and designing process.


Background: Recently, Cipriani and colleagues examined the relative efficacy of 12 new-generation antidepressants for major depression using network meta-analytic methods. They found that some of these medications outperformed others in patient response to treatment. However, several methodological criticisms have been raised about network meta-analysis in general, and about Cipriani's analysis in particular, raising the concern that the stated superiority of some antidepressants relative to others may be unwarranted. Materials and Methods: A Monte Carlo simulation was conducted that replicated Cipriani's network meta-analysis under the null hypothesis (i.e., no true differences between antidepressants). The following simulation strategy was implemented: (1) 1000 datasets were generated under the null hypothesis (i.e., under the assumption that there were no differences among the 12 antidepressants), (2) each of the 1000 datasets was network meta-analyzed, and (3) the total number of false positive results from the network meta-analyses was calculated. Findings: More than 7 times out of 10, the network meta-analysis produced one or more comparisons indicating the superiority of at least one antidepressant when no such true differences existed. Interpretation: Based on our simulation study, under conditions identical to those of the 117 RCTs with 236 treatment arms contained in Cipriani et al.'s meta-analysis, one or more false claims about the relative efficacy of antidepressants will be made over 70% of the time. As others have also shown, there is little evidence in these trials that any antidepressant is more effective than another. The tendency of network meta-analyses to generate false positive results should be considered when conducting multiple comparison analyses.
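The mechanism behind this finding can be illustrated with a much simpler stand-in. The sketch below uses plain pairwise z-tests among k equally effective groups (not a network meta-analysis, and with hypothetical parameter choices) to show how often at least one comparison is falsely significant when no true differences exist:

```python
import random
from statistics import NormalDist

def multiplicity_sim(k=12, n=50, alpha=0.05, n_sims=1000, seed=1):
    """Fraction of simulated 'trials' in which at least one pairwise
    comparison among k truly identical treatments is falsely declared
    significant. Plain pairwise z-tests stand in for the network
    meta-analysis; the parameters are illustrative, not Cipriani's."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    se = (2 / n) ** 0.5        # SE of a difference of two group means
    hits = 0
    for _ in range(n_sims):
        # group means under the null: every true effect is identical
        means = [sum(rng.gauss(0, 1) for _ in range(n)) / n
                 for _ in range(k)]
        if any(abs(means[i] - means[j]) / se > z_crit
               for i in range(k) for j in range(i + 1, k)):
            hits += 1
    return hits / n_sims
```

With 12 groups there are 66 pairwise comparisons, so the fraction of simulations with at least one false positive lands well above 0.5, consistent in spirit with the paper's ">70% of the time" figure.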


Acoustic backscatter contrast in depositional sediments under salmon farm cages in the Bay of Fundy, Canada, was correlated with localized changes in sediment geotechnical properties, as indicated by 4 independent measures of organic enrichment. Sediment total sulfides and redox potentials, enzyme-hydrolyzable amino acids, sediment profile imaging, and macrofaunal samples, taken at mid-cage positions, each rejected the null hypothesis that salmon cage footprints, defined acoustically as high-backscatter areas, were indistinguishable from nearby reference areas. Acoustic backscatter imaging appears capable of mapping organic enrichment in depositional sediments caused by excessive inputs of salmon farm wastes associated with intensive aquaculture.


The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
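As a sketch of the amplitude-based side of this comparison, the following implements a classical nonlinear prediction error: each point is predicted a fixed horizon ahead as the mean of the futures of its nearest delay-embedding neighbors. Function name and parameter defaults are illustrative, not those of the study:

```python
import math

def nonlinear_prediction_error(x, dim=3, delay=1, horizon=1, k=3, theiler=5):
    """Amplitude-based nonlinear prediction error: embed the series with
    the given dimension and delay, predict each point `horizon` steps
    ahead as the mean of the futures of its k nearest neighbors
    (excluding a Theiler window of temporally close points), and return
    the root-mean-square prediction error."""
    span = (dim - 1) * delay
    n = len(x) - span - horizon
    vecs = [tuple(x[i + j * delay] for j in range(dim)) for i in range(n)]
    targets = [x[i + span + horizon] for i in range(n)]
    sq_errs = []
    for i in range(n):
        # k nearest embedding neighbors outside the Theiler window
        neighbors = sorted(
            (math.dist(vecs[i], vecs[j]), j)
            for j in range(n) if abs(i - j) > theiler
        )[:k]
        pred = sum(targets[j] for _, j in neighbors) / k
        sq_errs.append((pred - targets[i]) ** 2)
    return math.sqrt(sum(sq_errs) / len(sq_errs))
```

A deterministic signal (e.g. a sampled sine) yields a much smaller error than the same values randomly shuffled, which is the contrast such measures exploit when testing for determinism against surrogates.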


OBJECTIVE The results of the Interventional Management of Stroke (IMS) III, Magnetic Resonance and REcanalization of Stroke Clots Using Embolectomy (MR RESCUE), and SYNTHESIS EXPANSION trials are expected to affect the practice of endovascular treatment for acute ischemic stroke. The purpose of this report is to review the components of the designs and methods of these trials and to describe the influence of those components on the interpretation of trial results. METHODS A critical review of the trial design and conduct of IMS III, MR RESCUE, and SYNTHESIS EXPANSION is performed with emphasis on patient selection, shortcomings in procedural aspects, and methodology of data ascertainment and analysis. The influence of each component is estimated based on published literature, including multicenter clinical trials reporting on endovascular treatment for acute ischemic stroke and myocardial infarction. RESULTS We critically examined the time interval between symptom onset and treatment and rates of angiographic recanalization to differentiate between "endovascular treatment" and "parameter optimized endovascular treatment" as it relates to the IMS III, MR RESCUE, and SYNTHESIS EXPANSION trials. All three trials failed to effectively test "parameter optimized endovascular treatment" due to the delay between symptom onset and treatment and less than optimal rates of recanalization. In all three trials, the magnitude of benefit with endovascular treatment required to reject the null hypothesis was larger than could be expected based on previous studies. The IMS III and SYNTHESIS EXPANSION trials demonstrated that rates of symptomatic intracerebral hemorrhage subsequent to treatment are similar between IV thrombolytics and endovascular treatment in matched acute ischemic stroke patients. 
The trials also indirectly validated the superiority/equivalence of IV thrombolytics (compared with endovascular treatment) in patients with minor neurological deficits and in those without large vessel occlusion on computed tomographic/magnetic resonance angiography. CONCLUSIONS The results do not support a large-magnitude benefit of endovascular treatment in the subjects randomized in all three trials. The possibility that benefits of a smaller magnitude exist in certain patient populations cannot be excluded. Large-magnitude benefits can be expected with implementation of "parameter optimized endovascular treatment" in patients with ischemic stroke who are candidates for IV thrombolytics.


OBJECTIVE To assess the in vivo amount of BPA released from a visible light-cured orthodontic adhesive immediately after bracket bonding. METHODS Twenty orthodontic patients were recruited after obtaining informed consent. All patients received 24 orthodontic brackets across both dental arches. In Group A (11 patients), 25 ml of tap water were used for mouth rinsing, whereas in Group B (9 patients) a simulated mouth rinse formulation was used: a mixture of 20 ml de-ionized water plus 5 ml absolute ethanol. Rinsing solutions were collected before bonding, immediately after placing the orthodontic appliances, and after washing out the oral cavity, and were then stored in glass tubes. Rinsing was performed in a single phase for 60 s with the entire volume of each liquid. The BPA analysis was performed by gas chromatography-mass spectrometry. RESULTS An increase in BPA concentration immediately after the 1st post-bonding rinse was observed for both rinsing media, which was reduced after the 2nd post-bonding rinse. Water exhibited higher levels of BPA concentration than water/ethanol after the 1st and 2nd post-bonding rinses. Two-way mixed repeated-measures ANOVA showed that the primary null hypothesis, declaring mean BPA concentration to be equal across rinsing medium and rinsing status, was rejected (p-value <0.001). The main effects of rinsing medium and rinsing status, as well as their interaction, were statistically significant (p-values 0.048, <0.001, and 0.011, respectively). SIGNIFICANCE A significant pattern of increase in BPA concentration, followed by a decrease that reached the initial values, was observed. The amount of BPA was relatively low and far below the reference limits of tolerable daily intake.


INTRODUCTION This paper focuses exclusively on experimental models with ultra-high dilutions (i.e., beyond 10^(-23)) that have been submitted to replication scrutiny. It updates previous surveys, considers suggestions made by the research community, and compares the state of replication in 1994 with that in 2015. METHODS Following a literature search, biochemical, immunological, botanical, cell-biological, and zoological studies on ultra-high dilutions (potencies) were included. Reports were grouped into initial studies, laboratory-internal replications, multicentre replications, and external replications. Repetition could yield comparable, zero, or opposite results. The null hypothesis was that test and control groups would not be distinguishable (zero effect). RESULTS A total of 126 studies were found, of which 28 were initial studies. When all 98 replicative studies were considered, 70.4% (i.e., 69) reported a result comparable to that of the initial study, 20.4% (20) reported zero effect, and 9.2% (9) an opposite result. Both for the studies until 1994 and for the studies from 1995-2015, the null hypothesis (dominance of zero results) should be rejected. Furthermore, the odds of finding a comparable result are generally higher than those of finding an opposite result. Although this is true for all three types of replication studies, the fraction of comparable studies diminishes from laboratory-internal (total 82.9%) to multicentre (total 75%) to external (total 48.3%), while the fraction of opposite results was 4.9%, 10.7%, and 13.8%, respectively. Furthermore, the probability of an external replication producing comparable results was greater for models that had already been further scrutinized by the initial researchers. CONCLUSIONS We found 28 experimental models which underwent replication. In total, 24 models were replicated with comparable results, 12 models with zero effect, and 6 models with opposite results. Five models were externally reproduced with comparable results. 
We encourage further replications of studies in order to learn more about the model systems used.