974 results for variance-ratio tests


Relevance: 30.00%

Abstract:

Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique commonly used to quantify changes in blood oxygenation and flow coupled to neuronal activation. One of the primary goals of fMRI studies is to identify localized brain regions where neuronal activation levels vary between groups. Single-voxel t-tests have commonly been used to determine whether activation related to the protocol differs across groups. Because the number of subjects in each study is generally limited, accurate estimation of the variance at each voxel is difficult. Combining information across voxels in the statistical analysis of fMRI data is therefore desirable in order to improve efficiency. Here we construct a hierarchical model and apply an Empirical Bayes framework to the analysis of group fMRI data, employing techniques used in high-throughput genomic studies. The key idea is to shrink residual variances by combining information across voxels, and subsequently to construct an improved test statistic in lieu of the classical t-statistic. This hierarchical model results in a shrinkage of voxel-wise residual sample variances towards a common value. The shrunken estimator for voxel-specific variance components in the group analyses outperforms the classical residual error estimator in terms of mean squared error. Moreover, the shrunken test statistic decreases the false positive rate when testing differences in brain contrast maps across a wide range of simulation studies. The methodology was also applied to experimental data from a cognitive activation task.
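
The shrinkage step can be illustrated with a minimal numpy sketch in the spirit of moderated t-statistics from the genomics literature; the prior degrees of freedom d0 and prior variance s0_sq are taken as given here (in practice they would be estimated from the spread of the voxel-wise sample variances), so this is an illustration of the idea rather than the authors' exact model.

```python
import numpy as np

def moderated_two_sample_t(group1, group2, d0, s0_sq):
    """Shrink voxel-wise residual variances toward a common prior value
    and form a moderated two-sample t-statistic (Empirical Bayes sketch).

    group1, group2 : arrays of shape (n_subjects, n_voxels)
    d0, s0_sq      : prior degrees of freedom and prior variance
                     (assumed given for this illustration)
    """
    n1, n2 = group1.shape[0], group2.shape[0]
    d = n1 + n2 - 2                                   # residual df per voxel
    mean_diff = group1.mean(axis=0) - group2.mean(axis=0)
    s_sq = (((group1 - group1.mean(axis=0)) ** 2).sum(axis=0) +
            ((group2 - group2.mean(axis=0)) ** 2).sum(axis=0)) / d
    # Posterior (shrunken) variance: weighted average of the voxel-wise
    # sample variance and the common prior variance.
    s_tilde_sq = (d0 * s0_sq + d * s_sq) / (d0 + d)
    t_mod = mean_diff / np.sqrt(s_tilde_sq * (1.0 / n1 + 1.0 / n2))
    return t_mod, d0 + d                              # statistic and its df
```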

Relevance: 30.00%

Abstract:

Many preanalytical variables affect the results of coagulation assays. A possible way to control some of them would be to accept blood specimens shipped in the original collection tube. The aim of our study was to investigate the stability of coagulation assays in citrated whole blood transported at ambient temperature for up to two days after specimen collection. Blood samples from 59 patients who attended our haematology outpatient ward for thrombophilia screening were transported at ambient temperature (outdoor during the day, indoor overnight) for the following periods of time: <1 hour, 4-6, 8-12, 24-28 and 48-52 hours prior to centrifugation and plasma freezing. The following coagulation tests were performed: PT, aPTT, fibrinogen, FII:C, FV:C, FVII:C, FVIII:C, FIX:C, FX:C, FXI:C, VWF:RCo, VWF:Ag, AT, PC activity, total and free PS antigen, modified APC-sensitivity-ratio, thrombin-antithrombin complex and D-dimer. Clinically significant changes, defined as a percentage change of more than 10% from the initial value, were observed for FV:C, FVIII:C and total PS antigen starting at 24-28 hours, and for PT, aPTT and FVII:C at 48-52 hours. No statistically significant differences were seen for fibrinogen, antithrombin, or thrombin-antithrombin complexes (Friedman repeated-measures analysis of variance). The present data suggest that the use of whole blood samples transported at ambient temperature may be an acceptable means of delivering specimens for coagulation analysis. With the exception of factor V and factor VIII coagulant activity and total PS antigen, all investigated parameters can be measured 24-28 hours after specimen collection without observing clinically relevant changes.
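
A rough sketch of the two checks described above — flagging a clinically significant change (more than 10% from the initial value) and applying the Friedman repeated-measures test across transport times — could look as follows; the data below are simulated placeholders and the per-time-point averaging is an assumption made only for illustration.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# Illustrative stand-in data: 59 patients x 5 transport-time points
# (<1 h, 4-6 h, 8-12 h, 24-28 h, 48-52 h); real assay results would go here.
results = rng.normal(loc=100.0, scale=5.0, size=(59, 5))

baseline = results[:, [0]]
pct_change = 100.0 * (results - baseline) / baseline
# Flag time points whose mean change from the initial value exceeds 10%.
clinically_significant = np.abs(pct_change).mean(axis=0) > 10.0

# Friedman repeated-measures test across the five time points.
stat, p = friedmanchisquare(*(results[:, j] for j in range(results.shape[1])))
print(clinically_significant, stat, p)
```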

Relevance: 30.00%

Abstract:

BACKGROUND While the assessment of analytical precision within medical laboratories has received much attention in scientific enquiry, the degree of variation between laboratories, as well as its sources, remains incompletely understood. In this study, we quantified the variance components of coagulation tests performed with identical analytical platforms in different laboratories and computed intraclass correlation coefficients (ICC) for each coagulation test. METHODS We analysed data from eight laboratories that measured fibrinogen twice in twenty healthy subjects with one of three different platforms, together with single measurements of prothrombin time (PT) and coagulation factors II, V, VII, VIII, IX, X, XI and XIII. For each platform, ANOVA was used to obtain the variance components of (i) the subjects, (ii) the laboratory and the technician, and (iii) the total variance for fibrinogen, and components (i) and (iii) for the remaining factors. RESULTS The variability of fibrinogen measurements within a laboratory ranged from 0.02 to 0.04; the variability between laboratories ranged from 0.006 to 0.097. Across platforms, the ICC ranged from 0.37 to 0.66 for fibrinogen and from 0.19 to 0.80 for PT. For the remaining factors the ICCs ranged from 0.04 (FII) to 0.93 (FVIII). CONCLUSIONS Variance components attributable to technicians or laboratory procedures were substantial, led to disappointingly low intraclass correlation coefficients for several factors, and were pronounced for some of the platforms. Our findings call for sustained efforts to raise the level of standardization of the structures and procedures involved in the quantification of coagulation factors.
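
A minimal sketch of one common ICC definition (one-way random effects, ICC(1)) computed from ANOVA mean squares is shown below; the study's full decomposition additionally separates laboratory and technician components, so this is only an approximation of the approach, with simulated stand-in data.

```python
import numpy as np

def icc_oneway(data):
    """One-way random-effects ICC(1): data has shape (n_subjects, k_labs),
    i.e. each subject measured once by each of k laboratories."""
    n, k = data.shape
    grand_mean = data.mean()
    subject_means = data.mean(axis=1)
    # Mean squares between subjects and within subjects (across laboratories).
    ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((data - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Illustrative use: 20 subjects measured by 8 laboratories.
rng = np.random.default_rng(1)
subject_effect = rng.normal(0.0, 0.2, size=(20, 1))
measurements = 3.0 + subject_effect + rng.normal(0.0, 0.15, size=(20, 8))
print(icc_oneway(measurements))
```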

Relevance: 30.00%

Abstract:

I introduce the new mgof command to compute distributional tests for discrete (categorical, multinomial) variables. The command supports large-sample tests for complex survey designs and exact tests for small samples, as well as classic large-sample chi-squared-approximation tests based on Pearson's X², the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read, 1984, Journal of the Royal Statistical Society, Series B (Methodological) 46: 440–464). The complex survey correction is based on the approach of Rao and Scott (1981, Journal of the American Statistical Association 76: 221–230) and parallels the survey design correction used for independence tests in svy: tabulate. mgof computes the exact tests by using Monte Carlo methods or exhaustive enumeration. mgof also provides an exact one-sample Kolmogorov–Smirnov test for discrete data.
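
mgof itself is a Stata command; as a rough Python analogue of the large-sample power-divergence test combined with a Monte Carlo "exact" p-value, one might write something like the following (function and variable names are illustrative, not part of mgof):

```python
import numpy as np
from scipy.stats import power_divergence

def mc_exact_pvalue(observed, probs, lambda_="pearson", n_sim=10000, seed=0):
    """Monte Carlo goodness-of-fit p-value for a discrete variable: simulate
    multinomial samples under H0 and compare power-divergence statistics
    against the observed one."""
    observed = np.asarray(observed)
    n = observed.sum()
    expected = n * np.asarray(probs)
    stat_obs = power_divergence(observed, expected, lambda_=lambda_).statistic
    rng = np.random.default_rng(seed)
    sims = rng.multinomial(n, probs, size=n_sim)
    stats = power_divergence(sims, expected, lambda_=lambda_, axis=1).statistic
    return stat_obs, (stats >= stat_obs).mean()

# Example: test whether a 4-category variable follows the uniform distribution.
print(mc_exact_pvalue([18, 25, 32, 25], [0.25, 0.25, 0.25, 0.25]))
```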

Relevance: 30.00%

Abstract:

This paper examines the mean-reverting property of real exchange rates. Earlier studies have generally not been able to reject the null hypothesis of a unit root in real exchange rates, especially for the post-Bretton Woods floating period. The results imply that long-run purchasing power parity does not hold. More recent studies, especially those using panel unit-root tests, have found more favorable results. However, Karlsson and Löthgren (2000) and others have recently pointed out several potential pitfalls of panel unit-root tests. Thus, the panel unit-root test results are suggestive, but they are far from conclusive. Moreover, consistent individual-country time series evidence that supports long-run purchasing power parity continues to be scarce. In this paper, we test for long memory using Lo's (1991) modified rescaled range test and the rescaled variance test of Giraitis, Kokoszka, Leipus, and Teyssière (2003). Our testing procedure provides a non-parametric alternative to the parametric tests commonly used in this literature. Our data set consists of monthly observations from April 1973 to April 2001 for the G-7 countries in the OECD. Our two tests find conflicting results when we use U.S. dollar real exchange rates. However, when non-U.S. dollar real exchange rates are used, we find only two cases out of fifteen where the null hypothesis of a unit root with short-term dependence can be rejected in favor of the alternative hypothesis of long-term dependence using the modified rescaled range test, and only one case when using the rescaled variance test. Our results therefore provide a contrast to the recent favorable panel unit-root test results.
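
A compact sketch of Lo's modified rescaled-range statistic is given below, hedged as one plausible implementation: it uses Newey–West-type weights for the long-run variance, and q = 0 recovers the classical R/S statistic.

```python
import numpy as np

def lo_modified_rs(x, q):
    """Lo's (1991) modified rescaled-range statistic V_n = Q_n / sqrt(n).
    q is the number of autocovariance lags in the long-run variance estimate."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    dev = x - x.mean()
    partial = np.cumsum(dev)
    rs_range = partial.max() - partial.min()
    # Long-run variance: sample variance plus weighted autocovariances.
    sigma_sq = (dev ** 2).sum() / n
    for j in range(1, q + 1):
        gamma_j = (dev[j:] * dev[:n - j]).sum() / n
        sigma_sq += 2.0 * (1.0 - j / (q + 1.0)) * gamma_j
    # Under short-range dependence, compare V_n against Lo's tabulated
    # fractiles of the range of a Brownian bridge.
    return rs_range / (np.sqrt(sigma_sq) * np.sqrt(n))
```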

Relevance: 30.00%

Abstract:

With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods that provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variances of sensitivity and specificity and for the correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model in which the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among the Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from the Bayesian bivariate models were not as good as those obtained from frequentist estimation, regardless of which prior distribution was used for the covariance matrix. The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
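
The structure of a bivariate binomial (random-effects logit) model of this kind can be sketched through its data-generating process; the hyperparameter values below are made up, and model fitting (e.g., by MCMC) is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical hyperparameters: mean logit-sensitivity/specificity,
# between-study standard deviations, and their correlation.
mu = np.array([1.5, 2.0])            # logit(sens), logit(spec)
sd = np.array([0.4, 0.5])
rho = -0.3
cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1] ** 2]])

n_studies = 20
logits = rng.multivariate_normal(mu, cov, size=n_studies)
sens = 1.0 / (1.0 + np.exp(-logits[:, 0]))
spec = 1.0 / (1.0 + np.exp(-logits[:, 1]))

# Study-level counts: true positives among diseased, true negatives among healthy.
n_diseased = rng.integers(30, 200, size=n_studies)
n_healthy = rng.integers(30, 200, size=n_studies)
tp = rng.binomial(n_diseased, sens)
tn = rng.binomial(n_healthy, spec)
```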

Relevance: 30.00%

Abstract:

A Monte Carlo simulation was conducted to investigate parameter estimation and hypothesis testing in some well-known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), Randomized Pólya Urn (RPU), Birth and Death Urn with Immigration (BDUI), and Drop-the-Loser Urn (DL). Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under the assumption of constant variance of the simple difference (RSIHR), the relative risk (ORR), and the odds ratio (OOR), respectively. The log-likelihood ratio test and three Wald-type tests (simple difference, log of relative risk, log of odds ratio) are compared across the adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. When compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics in SMLE have better normality and a lower type I error rate, and the power of hypothesis testing is more comparable with that of equal randomization. Usually, RSIHR has the highest power among the three optimal allocation ratios. However, the ORR allocation has better power and a lower type I error rate when the log of the relative risk is the test statistic. The expected number of failures under ORR is smaller than under RSIHR. It is also shown that the simple difference of response rates has the worst normality among all four test statistics. The power of the hypothesis test is always inflated when the simple difference is used. On the other hand, the normality of the log-likelihood ratio test statistic is robust against changes in the adaptive randomization procedure.
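
A minimal simulation of the Randomized Play-the-Winner urn, with illustrative success probabilities and the usual RPW(α, β) updating rule, might look like this:

```python
import numpy as np

def simulate_rpw(p_a, p_b, n_patients, alpha=1, beta=1, seed=0):
    """Randomized Play-the-Winner urn: start with alpha balls for each arm;
    a success on an arm (or a failure on the other arm) adds beta balls of
    that arm's type. Returns treatment assignments and responses."""
    rng = np.random.default_rng(seed)
    balls = np.array([alpha, alpha], dtype=float)    # urn composition [A, B]
    assignments, responses = [], []
    for _ in range(n_patients):
        arm = rng.choice(2, p=balls / balls.sum())   # draw a ball with replacement
        success = rng.random() < (p_a if arm == 0 else p_b)
        balls[arm if success else 1 - arm] += beta   # reward winner / other arm
        assignments.append(arm)
        responses.append(success)
    return np.array(assignments), np.array(responses)

# Illustrative run: arm A has a 0.7 success rate, arm B 0.5.
arms, resp = simulate_rpw(0.7, 0.5, 200)
print("proportion assigned to A:", (arms == 0).mean())
```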

Relevance: 30.00%

Abstract:

Genome-wide association study (GWAS) analytical methods were applied in a large biracial sample of individuals to investigate variation across the genome for its association with a surrogate low-density lipoprotein (LDL) particle size phenotype, the ratio of the LDL-cholesterol level to the ApoB level. Genotyping was performed on the Affymetrix 6.0 GeneChip with approximately one million single nucleotide polymorphisms (SNPs). The ratio of LDL cholesterol to ApoB was calculated, and association tests used multivariable linear regression analysis with an additive genetic model after adjustment for the covariates sex, age and BMI. Association tests were performed separately in African Americans and Caucasians. There were 9,562 qualified individuals in the Caucasian group and 3,015 qualified individuals in the African American group. Overall, in Caucasians two statistically significant loci were identified as being associated with the ratio of LDL cholesterol to ApoB: rs10488699 (p < 5×10^-8; 11q23.3, near BUD13) and rs964184 (p < 5×10^-8; 11q23.3, near ZNF259). We also found rs12286037 (p < 4×10^-7; 11q23.3, near APOA5/A4/C3/A1) with a suggestive association in the Caucasian sample. In exploratory analyses, a difference in the pattern of association between individuals taking and not taking LDL-cholesterol-lowering medications was observed: individuals who were not taking medications had smaller p-values than those taking medication. In the African American group, there were no significant (p < 5×10^-8) or suggestive (p < 4×10^-7) associations with the ratio of LDL cholesterol to ApoB after adjusting for age, BMI and sex and comparing individuals with and without LDL-cholesterol-lowering medication. Conclusions: There were significant and suggestive associations between SNP genotype and the ratio of LDL cholesterol to ApoB in Caucasians, but these associations may be modified by medication treatment.
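
The per-SNP association test described here amounts to a multivariable linear regression with an additively coded genotype; a sketch using statsmodels on simulated stand-in data (variable names are hypothetical, not from the study) is:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
# Hypothetical data frame: one row per subject, with an additively coded SNP
# (0/1/2 copies of the minor allele), covariates, and the LDL-C/ApoB ratio.
df = pd.DataFrame({
    "snp": rng.binomial(2, 0.3, n),          # additive genotype coding
    "sex": rng.integers(0, 2, n),
    "age": rng.normal(55, 10, n),
    "bmi": rng.normal(27, 4, n),
})
df["ldl_apob_ratio"] = 1.2 + 0.02 * df["snp"] + rng.normal(0, 0.15, n)

# Per-SNP test: additive genetic model adjusted for sex, age and BMI.
fit = smf.ols("ldl_apob_ratio ~ snp + sex + age + bmi", data=df).fit()
print(fit.pvalues["snp"])
```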

Relevance: 30.00%

Abstract:

In biomedical studies, the two common data structures are matched (paired) and unmatched designs. Recently, many researchers have become interested in meta-analysis to obtain a better understanding of a medical treatment from several sets of clinical data. The hybrid design, which combines the two data structures, raises fundamental questions for statistical methods and challenges for statistical inference. The applicable methods depend on the underlying distribution. If the outcomes are normally distributed, we would use the classic paired and two-independent-sample t-tests on the matched and unmatched cases; if not, we can apply the Wilcoxon signed-rank and rank-sum tests to each case. To assess an overall treatment effect in a hybrid design, we can apply the inverse-variance weighting method used in meta-analysis. In the nonparametric case, we can use a test statistic that combines the two Wilcoxon test statistics. However, these two test statistics are not on the same scale. We propose a hybrid test statistic based on the Hodges-Lehmann estimates of the treatment effects, which are medians on the same scale. To compare the proposed method, we use the classic meta-analysis t-test statistic on the combined estimates of the treatment effects from the two t-test statistics. Theoretically, the efficiency of two unbiased estimators of a parameter is the ratio of their variances. Using the concept of Asymptotic Relative Efficiency (ARE) developed by Pitman, we derive the ARE of the hybrid test statistic relative to the classic meta-analysis t-test statistic using the Hodges-Lehmann estimators associated with the two test statistics. From several simulation studies, we calculate the empirical type I error rate and power of the test statistics. The proposed statistic would provide an effective tool to evaluate and understand treatment effects in various public health studies as well as clinical trials.
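
A sketch of the building blocks — Hodges-Lehmann estimates for the matched and unmatched parts and an inverse-variance-weighted combination — is given below; variance estimates for the Hodges-Lehmann estimators (e.g., from the asymptotic variance of the Wilcoxon statistics or a bootstrap) are assumed to be supplied by the caller.

```python
import numpy as np

def hl_paired(x, y):
    """Hodges-Lehmann estimate for matched pairs: median of the Walsh
    averages of the within-pair differences."""
    d = np.asarray(x) - np.asarray(y)
    walsh = (d[:, None] + d[None, :]) / 2.0
    return np.median(walsh[np.triu_indices(len(d))])

def hl_two_sample(x, y):
    """Hodges-Lehmann estimate for two independent samples: median of all
    pairwise differences."""
    x, y = np.asarray(x), np.asarray(y)
    return np.median(x[:, None] - y[None, :])

def inverse_variance_combine(estimates, variances):
    """Inverse-variance-weighted overall treatment effect (fixed effect)."""
    w = 1.0 / np.asarray(variances)
    est = (w * np.asarray(estimates)).sum() / w.sum()
    return est, 1.0 / w.sum()    # combined estimate and its variance
```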

Relevance: 30.00%

Abstract:

This study deals with the mineralogical variability of siliceous and zeolitic sediments, porcellanites, and cherts at small intervals in the continuously cored sequence of Deep Sea Drilling Project Site 462. Skeletal opal is preserved down to a maximum burial depth of 390 meters (middle Eocene). Below this level, the tests are totally dissolved or replaced and filled by opal-CT, quartz, clinoptilolite, and calcite. Etching of opaline tests does not increase continuously with deeper burial. Opal solution accompanied by a conspicuous formation of authigenic clinoptilolite has a local maximum in Core 16 (150 m). A causal relationship with the lower Miocene hiatus at this level is highly probable. Oligocene to Cenomanian sediments represent an intermediate stage of silica diagenesis: the opal-CT/quartz ratios of the silicified rocks are frequently greater than 1, and quartz filling pores or replacing foraminifer tests is more widespread than quartz converted from an opal-CT precursor. As at other sites, there is a marked discontinuity in the transitions from biogenic opal via opal-CT to quartz with increasing depth of burial. Layers with unaltered opal-A alternate with porcellanite beds; the intensity of the opal-CT-to-quartz transformation changes very rapidly from horizon to horizon and obviously is not correlated with lithologic parameters. The silica for authigenic clinoptilolite was derived from biogenic opal and decaying volcanic components.

Relevance: 30.00%

Abstract:

This paper concentrates on the Early Oligocene palaeoclimate of the southern part of Eastern and Central Europe and gives a detailed climatological analysis, combined with leaf-morphological studies and modelling of the palaeoatmospheric CO2 level using stomatal and δ13C data. Climate data are calculated using the Coexistence Approach for Kiscellian floras of the Palaeogene Basin (Hungary and Slovenia) and coeval assemblages from Central and Southeastern Europe. Potential microclimatic or habitat variations are considered using morphometric analysis of fossil leaves from Hungarian, Slovenian and Italian floras. Reconstruction of CO2 is performed by applying a recently introduced mechanistic model. Results of the climate analysis indicate distinct latitudinal and longitudinal patterns for various climate variables, which agree well with the reconstructed palaeogeography and vegetation. The calculated climate variables in general suggest a warm and frost-free climate with low seasonal variation of temperature. A difference in temperature parameters is recorded between localities from Central and Southeastern Europe, manifested mainly in the mean temperature of the coldest month. Results of the morphometric analysis suggest microclimatic or habitat differences among the studied floras. Extending the scarce information available on atmospheric CO2 levels during the Oligocene, we provide data for a well-defined time interval. Reconstructed atmospheric CO2 levels agree well with threshold values for Antarctic ice sheet growth suggested by recent modelling studies. The successful application of the mechanistic model for the reconstruction of atmospheric CO2 levels raises new possibilities for future climate inference from macro-flora studies.

Relevance: 30.00%

Abstract:

Over the past decade, the ratio of Mg to Ca in foraminiferal tests has emerged as a valuable paleotemperature proxy. However, large uncertainties remain in the relationships between benthic foraminiferal Mg/Ca and temperature. Mg/Ca was measured in benthic foraminifera from 31 high-quality multicore tops collected in the Florida Straits, spanning a temperature range of 5.8° to 18.6°C. New calibrations are presented for Uvigerina peregrina, Planulina ariminensis, Planulina foveolata, and Hoeglundina elegans. The Mg/Ca values and temperature sensitivities vary among species, but all species exhibit a positive correlation whose slope decreases at higher temperatures. The decrease in the sensitivity of Mg/Ca to temperature may potentially be explained by Mg/Ca suppression at high carbonate ion concentrations. It is suggested that a carbonate ion influence on Mg/Ca may be adjusted for by dividing Mg/Ca by Li/Ca. The Mg/Li ratio displays stronger correlations with temperature than Mg/Ca alone, with up to 90% of the variance explained. These new calibrations are tested on several Last Glacial Maximum (LGM) samples from the Florida Straits. LGM temperatures reconstructed from Mg/Ca and Mg/Li are generally more scattered than core top measurements and may be contaminated by high-Mg overgrowths. The potential of Mg/Ca and Mg/Li as temperature proxies warrants further testing.
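
Species-specific exponential calibrations of this kind are commonly fitted by log-linear regression of the form Mg/Ca = a·exp(b·T); a minimal sketch with made-up core-top values (not the paper's data) is:

```python
import numpy as np

# Illustrative core-top data: bottom-water temperature (°C) and Mg/Ca (mmol/mol).
temperature = np.array([5.8, 8.0, 10.5, 13.0, 15.5, 18.6])
mg_ca = np.array([1.1, 1.3, 1.6, 1.9, 2.3, 2.8])

# Fit Mg/Ca = a * exp(b * T) by linear regression on log(Mg/Ca).
b, log_a = np.polyfit(temperature, np.log(mg_ca), 1)
a = np.exp(log_a)
print(f"Mg/Ca ~ {a:.2f} * exp({b:.3f} * T)")
```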

Relevance: 30.00%

Abstract:

The sediment sequence at Ocean Drilling Program (ODP) Site 910 (556 m water depth) on the Yermak Plateau in the Arctic Ocean features a remarkable "overconsolidated section" from ~19 to 70-95 m below sea floor (mbsf), marked by large increases in bulk density and sediment strength. The ODP Leg 151 Shipboard Scientific Party interpreted the overconsolidated section to be caused by (1) grounding of a marine-based ice sheet, derived from Svalbard and perhaps the Barents Sea ice sheet, and/or (2) coarser-grained glacial sedimentation, which allowed increased compaction. Here I present planktonic foraminiferal δ18O data based on Neogloboquadrina pachyderma (sinistrally coiling) that date the termination of overconsolidation near the boundary between isotope stages 16 and 17 (ca. 660 ka). No evidence is found for coarser-grained sedimentation, because lithic fragments >150 µm exhibit similar mean concentrations throughout the upper 24.5 mbsf. The overconsolidated section may reflect more extensive ice-sheet grounding prior to ca. 660 ka, suggesting a major change in the state of the Svalbard ice sheets during the mid-Quaternary. Furthermore, continuous sedimentation since that time argues against a pervasive Arctic ice shelf impinging on the Yermak Plateau during the past 660 k.y. These findings suggest that Svalbard ice-sheet history was largely independent of circum-Arctic ice-sheet history during the middle to late Quaternary.

Relevance: 30.00%

Abstract:

Laser ablation inductively coupled plasma-mass spectrometry microanalysis of fossil and live Globigerinoides ruber from the eastern Indian Ocean reveals large variations in Mg/Ca composition both within and between individual tests from core top or plankton pump samples. Although the extent of intertest and intratest compositional variability exceeds that attributable to calcification temperature, the pooled mean Mg/Ca molar values obtained for core top samples between the equator and >30°S form a strong exponential correlation with mean annual sea surface temperature (Mg/Ca [mmol/mol] = 0.52 exp(0.076 × SST [°C]), r² = 0.99). The intertest Mg/Ca variability within these deep-sea core top samples is a source of significant uncertainty in Mg/Ca seawater temperature estimates and is notable for being site specific. Our results indicate that widely assumed uncertainties in Mg/Ca thermometry may be underestimated. We show that statistical power analysis can be used to evaluate the number of tests needed to achieve a target level of uncertainty on a sample-by-sample basis. A varying bias also arises from the presence and varying mix of two morphotypes (G. ruber ruber and G. ruber pyramidalis), which have different mean Mg/Ca values. Estimated calcification temperature differences between these morphotypes range up to 5°C and are notable for correlating with the seasonal range in seawater temperature at different sites.
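
Taking the pooled calibration quoted above at face value, a minimal sketch of inverting it to estimate SST, plus a simple power calculation for the number of tests needed to reach a target temperature uncertainty (a normal-approximation shortcut, not necessarily the authors' exact procedure), is:

```python
import numpy as np

def sst_from_mgca(mg_ca, a=0.52, b=0.076):
    """Invert the pooled core-top calibration Mg/Ca = 0.52 * exp(0.076 * SST)."""
    return np.log(np.asarray(mg_ca) / a) / b

def n_tests_for_target(intertest_sd_mgca, mean_mgca, target_sst_error,
                       b=0.076, z=1.96):
    """Rough estimate of how many individual tests must be pooled so that the
    95% uncertainty of the mean-derived SST stays below target_sst_error (°C).
    Error propagation through the exponential calibration gives
    dSST ~ d(Mg/Ca) / (b * Mg/Ca)."""
    sd_sst = intertest_sd_mgca / (b * mean_mgca)   # per-test SST-equivalent SD
    return int(np.ceil((z * sd_sst / target_sst_error) ** 2))

print(sst_from_mgca(3.2))                 # SST implied by Mg/Ca = 3.2 mmol/mol
print(n_tests_for_target(0.8, 3.2, 1.0))  # tests needed for +/-1 °C at 95%
```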