998 results for 230204 Applied Statistics
Abstract:
Genome-wide association studies (GWASs) have identified many genetic variants underlying complex traits. Many detected genetic loci harbor variants that associate with multiple, even distinct, traits. Most current analysis approaches focus on single traits, even though the final results from multiple traits are evaluated together. Such approaches miss the opportunity to systemically integrate the phenome-wide data available for genetic association analysis. In this study, we propose a general approach that can integrate association evidence from summary statistics of multiple traits, either correlated, independent, continuous, or binary traits, which might come from the same or different studies. We allow for trait heterogeneity effects. Population structure and cryptic relatedness can also be controlled. Our simulations suggest that the proposed method has improved statistical power over single-trait analysis in most of the cases we studied. We applied our method to the Continental Origins and Genetic Epidemiology Network (COGENT) African ancestry samples for three blood pressure traits and identified four loci (CHIC2, HOXA-EVX1, IGFBP1/IGFBP3, and CDH17; p < 5.0 × 10^-8) associated with hypertension-related traits that were missed by a single-trait analysis in the original report. Six additional loci with suggestive association evidence (p < 5.0 × 10^-7) were also observed, including CACNA1D and WNT3. Our study strongly suggests that analyzing multiple phenotypes can improve statistical power and that such analysis can be executed with the summary statistics from GWASs. Our method also provides a way to study a cross-phenotype (CP) association by using summary statistics from GWASs of multiple phenotypes.
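As a rough illustration of how association evidence from several traits can be combined using only summary statistics, the Python sketch below applies an O'Brien-type fixed-effects combination of per-trait z-scores that accounts for their correlation under the null. The z-scores and correlation matrix are hypothetical, and this is a generic cross-phenotype statistic rather than the exact method proposed in the abstract.

import numpy as np
from scipy import stats

def cross_phenotype_test(z, R):
    """Combine single-trait z-scores for one variant into a 1-df chi-square test.

    z : z-scores (one per trait) from single-trait GWAS summary statistics
    R : trait-by-trait correlation matrix of the z-scores under the null
        (in practice estimated from genome-wide summary statistics of null SNPs)
    """
    z = np.asarray(z, dtype=float)
    e = np.ones_like(z)
    Rinv = np.linalg.inv(R)
    # O'Brien-type homogeneous-effect statistic: (e' R^-1 z)^2 / (e' R^-1 e) ~ chi2(1)
    stat = (e @ Rinv @ z) ** 2 / (e @ Rinv @ e)
    pval = stats.chi2.sf(stat, df=1)
    return stat, pval

# Hypothetical example: three correlated blood-pressure traits for one variant
z_scores = [3.1, 2.4, 2.8]
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])
stat, p = cross_phenotype_test(z_scores, R)
print(f"chi2(1) = {stat:.2f}, p = {p:.2e}")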
Abstract:
This study probed for an answer to the question, "How do you identify as early as possible those students who are at risk of failing or dropping out of college so that intervention can take place?" by field testing two diagnostic instruments with a group of first semester Seneca College Computer Studies students. In some respects, the research approach was such as might be taken in a pilot study. Because of the complexity of the issue, this study did not attempt to go beyond discovery, understanding and description. Although some inferences may be drawn from the results of the study, no attempt was made to establish any causal relationship between or among the factors or variables represented here. Both quantitative and qualitative data were gathered during four research phases: background, early identification, intervention, and evaluation. To gain a better understanding of the problem of student attrition within the School of Computer Studies at Seneca College, several methods were used, including retrospective analysis of enrollment statistics, faculty and student interviews and questionnaires, and tracking of the sample population. The significance of the problem was confirmed by the results of this study. The findings further confirmed the importance of the role of faculty in student retention and support the need to improve the quality of teacher/student interaction. As well, the need for skills assessment followed by supportive counselling and placement was supported by the findings from this study. Strategies for reducing student attrition were identified by faculty and students. As part of this study, a project referred to as "A Student Alert Project" (ASAP) was undertaken at the School of Computer Studies at Seneca College. Two commercial diagnostic instruments, the Noel/Levitz College Student Inventory (CSI) and the Learning and Study Strategies Inventory (LASSI), provided quantitative data which were subsequently analyzed in Phase 4 in order to assess their usefulness as early identification tools. The findings show some support for using these instruments in a two-stage approach to early identification and intervention: the CSI as an early identification instrument and the LASSI as a counselling tool for those students who have been identified as being at risk. The findings from the preliminary attempts at intervention confirmed the need for a structured student advisement program where faculty are selected, trained, and recognized for their advisor role. Based on the finding that very few students acted on the diagnostic results and recommendations, the need for institutional intervention by way of intrusive measures was confirmed.
Abstract:
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently the 90% confidence interval for the differences in the means on the natural log scale should be within the interval (-0.2231, 0.2231). We compare the gold standard method for calculation of the sample size based on the non-central t distribution with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd.
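The sketch below illustrates the kind of sample-size calculation discussed: power of the two one-sided tests (TOST) for average bioequivalence in a 2 × 2 cross-over, evaluated with the non-central t distribution on the log scale. It is a simplified illustration under the usual lognormal assumptions, not a reproduction of the paper's formulas; the CV, true ratio, and target power in the example are hypothetical.

import numpy as np
from scipy import stats

def tost_power(n_total, cv, true_ratio=0.95, alpha=0.05, limits=(0.80, 1.25)):
    """Approximate power of TOST for average bioequivalence in a 2x2 cross-over,
    using the non-central t distribution on the log scale.

    n_total : total number of subjects (both sequences combined)
    cv      : within-subject coefficient of variation on the original scale
    """
    theta = np.log(true_ratio)                  # true difference of means, log scale
    sigma_w = np.sqrt(np.log(1.0 + cv ** 2))    # within-subject SD, log scale
    se = sigma_w * np.sqrt(2.0 / n_total)       # SE of the treatment difference
    df = n_total - 2
    tcrit = stats.t.ppf(1.0 - alpha, df)
    nc_low = (theta - np.log(limits[0])) / se
    nc_high = (theta - np.log(limits[1])) / se
    # P(reject both one-sided tests), standard lower-bound approximation
    power = stats.nct.cdf(-tcrit, df, nc_high) - stats.nct.cdf(tcrit, df, nc_low)
    return max(power, 0.0)

def sample_size(cv, target_power=0.80, **kwargs):
    """Smallest even total sample size reaching the target power."""
    n = 4
    while tost_power(n, cv, **kwargs) < target_power:
        n += 2
    return n

# Hypothetical example: CV = 25%, true ratio 0.95, 80% power (around 28 subjects)
print(sample_size(cv=0.25, true_ratio=0.95))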
Abstract:
The International Citicoline Trial in acUte Stroke is a sequential phase III study of the use of the drug citicoline in the treatment of acute ischaemic stroke, which was initiated in 2006 in 56 treatment centres. The primary objective of the trial is to demonstrate improved recovery of patients randomized to citicoline relative to those randomized to placebo after 12 weeks of follow-up. The primary analysis will take the form of a global test combining the dichotomized results of assessments on three well-established scales: the Barthel Index, the modified Rankin scale and the National Institutes of Health Stroke Scale. This approach was previously used in the analysis of the influential National Institute of Neurological Disorders and Stroke trial of recombinant tissue plasminogen activator in stroke. The purpose of this paper is to describe how this trial was designed, and in particular how the simultaneous objectives of taking into account three assessment scales, performing a series of interim analyses and conducting treatment allocation and adjusting the analyses to account for prognostic factors, including more than 50 treatment centres, were addressed. Copyright (C) 2008 John Wiley & Sons, Ltd.
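One common way to implement a global test of this kind is to fit a single treatment effect across the dichotomized endpoints with generalized estimating equations. The sketch below does this on simulated data with statsmodels; it is only an illustration of the idea, not the ICTUS trial's actual analysis plan, and the outcome probabilities are invented.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Hypothetical trial data: per patient, favourable-outcome indicators on the
# three dichotomized scales (Barthel Index, modified Rankin, NIHSS)
n = 400
treatment = rng.integers(0, 2, n)
base_p = 0.35 + 0.08 * treatment               # assumed treatment benefit
records = []
for i in range(n):
    for scale in ("barthel", "rankin", "nihss"):
        records.append({"patient": i, "treatment": treatment[i],
                        "scale": scale, "success": rng.binomial(1, base_p[i])})
df = pd.DataFrame(records)

# Global test: one shared treatment effect across the three binary endpoints,
# with an exchangeable working correlation within patient
model = sm.GEE.from_formula("success ~ treatment", groups="patient", data=df,
                            family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(f"global treatment p-value: {res.pvalues['treatment']:.3f}")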
Abstract:
Although the sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty being related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods are applied to inter-calibrate the observers which may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers to the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate their observational thresholds [S_S] defined such that the observer is assumed to miss all of the groups with an area smaller than S_S and report all the groups larger than S_S. Next, using a Monte-Carlo method we construct, from the reference data set, a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality. We emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
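A minimal sketch of the correction-matrix idea is given below: an observer with acuity threshold S_S is assumed to report only the groups with area at or above S_S, and a Monte-Carlo pass over a reference set of daily group areas tabulates the distribution of the true group count conditional on the reported count. The reference data here are synthetic and the threshold is hypothetical; the construction in the paper is based on the Royal Greenwich Observatory records.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference data: daily lists of sunspot-group areas (millionths of
# the solar hemisphere), standing in for the Royal Greenwich Observatory set
reference_days = [rng.lognormal(mean=4.0, sigma=1.2, size=rng.poisson(5))
                  for _ in range(20000)]

S_S = 60.0          # assumed observational (acuity) threshold of one observer
G_MAX = 31          # group counts considered: 0 .. 30

# Monte-Carlo correction matrix: rows = count the observer would report
# (groups with area >= S_S), columns = true count in the reference data
counts = np.zeros((G_MAX, G_MAX))
for areas in reference_days:
    true_n = min(len(areas), G_MAX - 1)
    seen_n = min(int(np.sum(areas >= S_S)), G_MAX - 1)
    counts[seen_n, true_n] += 1

# Conditional distribution P(true count | reported count); it is strongly
# non-linear, so no single proportionality factor reproduces it
row_sums = counts.sum(axis=1, keepdims=True)
correction = np.divide(counts, row_sums, out=np.zeros_like(counts),
                       where=row_sums > 0)

reported = 3
expected_true = correction[reported] @ np.arange(G_MAX)
print(f"Observer reports {reported} groups -> expected true number ~ {expected_true:.1f}")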
Abstract:
In chemical analyses performed by laboratories, one faces the problem of determining the concentration of a chemical element in a sample. In practice, one deals with the problem using the so-called linear calibration model, which assumes that the errors associated with the independent variables are negligible compared with those of the response variable. In this work, a new linear calibration model is proposed assuming that the independent variables are subject to heteroscedastic measurement errors. A simulation study is carried out in order to verify some properties of the estimators derived for the new model, and the usual calibration model is also considered for comparison with the new approach. Three applications are considered to verify the performance of the new approach. Copyright © 2010 John Wiley & Sons, Ltd.
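For orientation, the sketch below shows ordinary linear calibration with weighted least squares when the response errors are heteroscedastic: fit the line on known standards, then invert it to estimate an unknown concentration. The paper's model instead places heteroscedastic measurement errors on the independent variables, which this simplified sketch does not handle; all data and variances here are hypothetical.

import numpy as np

# Hypothetical calibration data: known standard concentrations (x), measured
# instrument responses (y), and a per-point error variance that grows with x
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([0.9, 2.1, 3.8, 8.3, 15.6, 32.4])
var_y = (0.05 + 0.02 * x) ** 2           # assumed heteroscedastic variances

# Weighted least-squares fit of y = a + b*x (weights = 1/variance)
w = 1.0 / var_y
X = np.column_stack([np.ones_like(x), x])
a, b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

# Calibration step: invert the fitted line to estimate an unknown concentration
y_new = 10.0                              # response measured on an unknown sample
x_hat = (y_new - a) / b
print(f"intercept={a:.3f}, slope={b:.3f}, estimated concentration={x_hat:.3f}")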
Abstract:
The generalized Birnbaum-Saunders (GBS) distribution is a new class of positively skewed models with lighter and heavier tails than the traditional Birnbaum-Saunders (BS) distribution, which is largely applied to study lifetimes. However, the theoretical argument and the interesting properties of the GBS model have made its application possible beyond the lifetime analysis. The aim of this paper is to present the GBS distribution as a useful model for describing pollution data and deriving its positive and negative moments. Based on these moments, we develop estimation and goodness-of-fit methods. Also, some properties of the proposed estimators useful for developing asymptotic inference are presented. Finally, an application with real data from Environmental Sciences is given to illustrate the methodology developed. This example shows that the empirical fit of the GBS distribution to the data is very good. Thus, the GBS model is appropriate for describing air pollutant concentration data, which produces better results than the lognormal model when the administrative target is determined for abating air pollution. Copyright (c) 2007 John Wiley & Sons, Ltd.
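The generalized BS family is not available in standard Python libraries, but the classical two-parameter Birnbaum-Saunders model (scipy's fatiguelife) can illustrate the kind of fit and goodness-of-fit comparison with the lognormal described above. The data below are simulated, and the comparison is only a sketch, not the paper's methodology.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical pollutant-concentration data, generated here from a
# Birnbaum-Saunders (fatigue-life) distribution for illustration
data = stats.fatiguelife.rvs(c=0.8, scale=20.0, size=500, random_state=rng)

# Maximum-likelihood fit of the classical BS model (location fixed at 0);
# the generalized BS of the paper would replace the normal kernel here
shape, loc, scale = stats.fatiguelife.fit(data, floc=0)
print(f"shape (alpha) = {shape:.3f}, scale (beta) = {scale:.2f}")

# Simple goodness-of-fit check via the Kolmogorov-Smirnov statistic
ks = stats.kstest(data, 'fatiguelife', args=(shape, loc, scale))
print(f"BS KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")

# Compare with a lognormal fit, as in the abstract's comparison
s, loc_ln, scale_ln = stats.lognorm.fit(data, floc=0)
ks_ln = stats.kstest(data, 'lognorm', args=(s, loc_ln, scale_ln))
print(f"lognormal KS statistic = {ks_ln.statistic:.3f}")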
Abstract:
There are several versions of the lognormal distribution in the statistical literature; one is based on the exponential transformation of the generalized normal (GN) distribution. This paper presents a Bayesian analysis for the generalized lognormal distribution (logGN), considering independent non-informative Jeffreys distributions for the parameters, as well as the procedure for implementing the Gibbs sampler to obtain the posterior distributions of the parameters. The results are used to analyze failure time models with right-censored and uncensored data. The proposed method is illustrated using actual failure time data of computers.
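A minimal sketch of a Gibbs sampler of this general kind is shown below for the ordinary lognormal model with a Jeffreys-type prior, using data augmentation for right-censored observations (censored log failure times are imputed from a truncated normal at each iteration). The paper's generalized lognormal has an extra shape parameter that this simplified sketch omits, and the failure-time data here are invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical right-censored failure-time data; model: log T ~ Normal(mu, sigma^2)
times    = np.array([12., 45., 30., 80., 80., 22., 65., 80., 17., 54.])
censored = np.array([0,   0,   0,   1,   1,   0,   0,   1,   0,   0], bool)
logt = np.log(times)

n = len(times)
mu, sigma2 = logt.mean(), logt.var()
draws = []

for it in range(5000):
    sigma = np.sqrt(sigma2)
    # 1. Data augmentation: impute censored log failure times from a normal
    #    truncated below at the (log) censoring time
    y = logt.copy()
    a = (logt[censored] - mu) / sigma          # standardized lower bounds
    y[censored] = stats.truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                                      random_state=rng)
    # 2. mu | sigma^2, y  ~  Normal(mean(y), sigma^2 / n)  under the Jeffreys prior
    mu = rng.normal(y.mean(), sigma / np.sqrt(n))
    # 3. sigma^2 | mu, y  ~  Inverse-Gamma(n/2, sum((y - mu)^2)/2)
    ss = np.sum((y - mu) ** 2)
    sigma2 = 1.0 / rng.gamma(n / 2.0, 2.0 / ss)
    if it >= 1000:                              # discard burn-in
        draws.append((mu, sigma2))

mu_post, s2_post = np.mean(draws, axis=0)
print(f"posterior mean of mu = {mu_post:.3f}, of sigma^2 = {s2_post:.3f}")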
Multivariate quality control studies applied to Ca(II) and Mg(II) determination by a portable method
Abstract:
A portable or field test method for simultaneous spectrophotometric determination of calcium and magnesium in water using multivariate partial least squares (PLS) calibration methods is proposed. The method is based on the reaction between the analytes and methylthymol blue at pH 11. The spectral information was used as the X-block, and the Ca(II) and Mg(II) concentrations obtained by a reference technique (ICP-AES) were used as the Y-block. Two series of analyses were performed, with a month's difference between them. The first series was used as the calibration set and the second one as the validation set. Multivariate statistical process control (MSPC) techniques, based on statistics from principal component models, were used to study the features and evolution with time of the spectral signals. Signal standardization was used to correct the deviations between series. Method validation was performed by comparing the predictions of the PLS model with the reference Ca(II) and Mg(II) concentrations determined by ICP-AES using the joint interval test for the slope and intercept of the regression line with errors in both axes. (C) 1998 John Wiley & Sons, Ltd.
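The sketch below shows the core PLS calibration step with scikit-learn on simulated spectra: the spectral matrix is the X-block, the reference Ca(II)/Mg(II) concentrations are the Y-block, and a later series is predicted with the fitted model. The spectra, concentrations, and number of latent variables are hypothetical, and the signal standardization and MSPC monitoring described in the abstract are not reproduced.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)

# Hypothetical data: 40 calibration samples x 200 wavelengths (X-block) and
# reference Ca(II)/Mg(II) concentrations from ICP-AES (Y-block, in mg/L)
n_samples, n_wavelengths = 40, 200
Y_cal = rng.uniform([5, 1], [80, 30], size=(n_samples, 2))     # [Ca, Mg]
pure_spectra = rng.random((2, n_wavelengths))                  # assumed pure-component profiles
X_cal = Y_cal @ pure_spectra + rng.normal(0, 0.05, (n_samples, n_wavelengths))

# Fit the multivariate PLS calibration model
pls = PLSRegression(n_components=4)
pls.fit(X_cal, Y_cal)

# "Validation series": new spectra measured later, predicted with the same model
Y_val = rng.uniform([5, 1], [80, 30], size=(10, 2))
X_val = Y_val @ pure_spectra + rng.normal(0, 0.05, (10, n_wavelengths))
Y_pred = pls.predict(X_val)

rmse = np.sqrt(mean_squared_error(Y_val, Y_pred, multioutput='raw_values'))
print(f"RMSEP  Ca: {rmse[0]:.2f} mg/L,  Mg: {rmse[1]:.2f} mg/L")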
Abstract:
Exact and closed-form expressions for the level crossing rate and average fade duration are presented for the M branch pure selection combining (PSC), equal gain combining (EGC), and maximal ratio combining (MRC) techniques, assuming independent branches in a Nakagami environment. The analytical results are thoroughly validated by reducing the general case to some special cases, for which the solutions are known, and by means of simulation for the more general case. The model developed here is general and can be easily applied to other fading statistics (e.g., Rice).
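A simple Monte-Carlo check of the combining schemes can be set up by drawing independent Nakagami-m branch envelopes (the squared envelope is gamma distributed) and forming the PSC, EGC, and MRC outputs, as sketched below. This only compares static envelope statistics such as outage probability; level crossing rate and average fade duration would require time-correlated fading, and all parameters here are hypothetical.

import numpy as np

rng = np.random.default_rng(4)

M, m, omega = 3, 2.0, 1.0          # branches, Nakagami-m parameter, mean power (assumed)
n = 200_000                        # Monte-Carlo samples

# Independent Nakagami-m branch envelopes: r^2 ~ Gamma(shape=m, scale=omega/m)
r = np.sqrt(rng.gamma(m, omega / m, size=(n, M)))

r_psc = r.max(axis=1)                          # pure selection combining
r_mrc = np.sqrt((r ** 2).sum(axis=1))          # maximal ratio combining
r_egc = r.sum(axis=1) / np.sqrt(M)             # equal gain combining (usual normalization)

# Outage probability P(envelope <= threshold) for each scheme
threshold = 0.5
for name, env in [("PSC", r_psc), ("EGC", r_egc), ("MRC", r_mrc)]:
    print(f"{name}: P(r <= {threshold}) = {np.mean(env <= threshold):.4f}")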
Abstract:
This paper introduces a methodology for predicting the surface roughness of advanced ceramics using an Adaptive Neuro-Fuzzy Inference System (ANFIS). To this end, a grinding machine was used, equipped with an acoustic emission sensor and a power transducer connected to the electric motor rotating the diamond grinding wheel. The alumina workpieces used in this work were pressed and sintered into rectangular bars. Acoustic emission and cutting power signals were collected during the tests and digitally processed to calculate the mean, standard deviation, and two other statistical measures. These statistics, as well as the root mean square of the acoustic emission and cutting power signals, were used as input data for ANFIS. The output values of surface roughness (measured during the tests) were used for training and validation of the model. The results indicated that an ANFIS network is an excellent tool when applied to predict the surface roughness of ceramic workpieces in the grinding process.
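The feature-extraction step described above (mean, standard deviation, and RMS of the acquired signals) can be sketched as follows; the signals are synthetic, and the ANFIS model itself, which is not part of standard Python libraries, is not reproduced here.

import numpy as np

def signal_features(x):
    """Mean, standard deviation and root mean square of one acquired signal."""
    x = np.asarray(x, dtype=float)
    return np.array([x.mean(), x.std(), np.sqrt(np.mean(x ** 2))])

rng = np.random.default_rng(5)

# Hypothetical acquisitions from one grinding pass: acoustic emission and cutting power
ae_signal = rng.normal(0.0, 0.8, 10_000)
power_signal = 1.5 + 0.2 * rng.normal(size=10_000)

# Input vector for the surface-roughness model (ANFIS in the paper; any
# regressor could consume the same features)
features = np.concatenate([signal_features(ae_signal), signal_features(power_signal)])
print(features)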
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)