21 results for 230204 Applied Statistics
Abstract:
Studies on learning management systems (LMS) have largely been technical in nature, with an emphasis on evaluating the human-computer interaction (HCI) processes involved in using the LMS. This paper reports a study that evaluates the information interaction processes on an eLearning course used in teaching an applied Statistics course; the eLearning course is treated here as synonymous with an information system. The study explores issues of missing context in information stored in information systems. Using the semiotic framework as a guide, the researchers evaluated an existing eLearning course with a view to proposing a model for designing improved eLearning courses for future eLearning programmes. In this exploratory study, a survey questionnaire was used to collect data from 160 participants on an eLearning course in Statistics in Applied Climatology. The views of the participants were analysed with a focus solely on human information interaction issues. Guided by the semiotic framework, syntactic, semantic, pragmatic and social context gaps were identified. The information interaction problems identified include ambiguous instructions, inadequate information, lack of sound and interface design problems, among others. These problems affected the quality of new knowledge created by the participants. The researchers thus highlight the challenges of missing information context when data are stored in an information system. The study concludes by proposing a human information interaction model for improving information interaction quality in the design of eLearning courses on learning management platforms and other information systems.
Abstract:
Sensory thresholds are often collected through ascending forced-choice methods. Group thresholds are important for comparing stimuli or populations, yet the method has two problems: an individual may guess the correct answer by chance at any concentration step, and an individual might detect correctly at low concentrations but become adapted or fatigued at higher concentrations. The survival analysis method deals with both issues. Individual sequences of incorrect and correct answers are adjusted, taking into account the group performance at each concentration; the adjustment reduces the probability that runs of consecutive correct answers arise by chance. The adjusted sequences are then submitted to survival analysis to determine group thresholds. The technique was applied to an aroma threshold study and a taste threshold study, and it resulted in group thresholds similar to those obtained with ASTM or logarithmic regression procedures. Significant differences in taste thresholds between younger and older adults were detected. The approach provides a more robust alternative to previous estimation methods.
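A minimal sketch of the survival-analysis step described in this abstract, assuming each panellist's adjusted sequence has already been reduced to the first concentration step at which detection became sustained (right-censored if detection never occurred); the Kaplan-Meier estimator and the 50% crossing point used here are illustrative choices, not the paper's exact procedure, and all data are hypothetical.

```python
import numpy as np

# Hypothetical adjusted data: for each panellist, the concentration step at which
# detection became sustained (np.inf if never detected, i.e. right-censored).
detect_step = np.array([2, 3, 3, 4, 5, 5, 6, np.inf, 4, 2, np.inf, 5])
concentrations = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)  # doubling steps

def km_survival(detect_step, steps):
    """Kaplan-Meier estimate of the fraction of the group still not detecting
    at each concentration step (censored panellists never 'fail')."""
    surv, s = [], 1.0
    for k, _ in enumerate(steps, start=1):
        at_risk = np.sum(detect_step >= k)   # still undetected entering step k
        events = np.sum(detect_step == k)    # first sustained detection at step k
        if at_risk > 0:
            s *= 1.0 - events / at_risk
        surv.append(s)
    return np.array(surv)

surv = km_survival(detect_step, concentrations)

# Group threshold: concentration where the non-detection curve crosses 0.5,
# obtained by log-linear interpolation between the bracketing steps.
idx = np.argmax(surv <= 0.5)
s_pair = [surv[idx], surv[idx - 1]]
c_pair = [np.log(concentrations[idx]), np.log(concentrations[idx - 1])]
threshold = np.exp(np.interp(0.5, s_pair, c_pair))
print(f"Estimated group threshold: {threshold:.2f}")
```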
Abstract:
We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration-time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently that the 90% confidence interval for the difference in the means on the natural log scale should lie within the interval (-0.2231, 0.2231). We compare the gold-standard method for calculating the sample size, based on the non-central t distribution, with methods based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desired level of power might be under- or overestimated relative to the gold-standard method. In some situations, however, the approximate methods produce sample sizes very similar to those of the gold-standard method. Copyright © 2005 John Wiley & Sons, Ltd.
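A minimal sketch of the sample-size comparison described in this abstract, assuming a 2 × 2 cross-over analysed by the two one-sided tests (TOST) procedure with the (0.80, 1.25) margins on the ratio scale. The power expression below is the common lower bound that ignores the joint rejection event, so it only approximates the exact ("gold standard") calculation discussed in the paper; the CV value, true ratio and target power are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def tost_power(n_total, cv, ratio=0.95, alpha=0.05, theta=np.log(1.25), method="nct"):
    """Approximate TOST power for average bioequivalence in a 2x2 cross-over.

    n_total : total number of subjects (both sequences combined)
    cv      : within-subject coefficient of variation on the original scale
    ratio   : true ratio of geometric means (delta = log(ratio))
    method  : "nct" (non-central t), "t" (shifted central t) or "norm" (normal)
    Uses power >= P(T1 > crit) + P(T2 < -crit) - 1, ignoring the joint event."""
    delta = np.log(ratio)
    sigma_w = np.sqrt(np.log(1.0 + cv**2))   # within-subject SD on the log scale
    se = sigma_w * np.sqrt(2.0 / n_total)    # standard error of the difference
    df = n_total - 2
    crit = stats.t.ppf(1 - alpha, df)
    nc1 = (delta + theta) / se               # distance from the lower margin
    nc2 = (delta - theta) / se               # distance from the upper margin
    if method == "nct":
        p1 = 1 - stats.nct.cdf(crit, df, nc1)
        p2 = stats.nct.cdf(-crit, df, nc2)
    elif method == "t":
        p1 = 1 - stats.t.cdf(crit - nc1, df)  # central t shifted by the non-centrality
        p2 = stats.t.cdf(-crit - nc2, df)
    else:
        crit = stats.norm.ppf(1 - alpha)
        p1 = 1 - stats.norm.cdf(crit - nc1)
        p2 = stats.norm.cdf(-crit - nc2)
    return max(0.0, p1 + p2 - 1.0)

def sample_size(cv, ratio=0.95, power=0.80, method="nct"):
    """Smallest even total sample size reaching the target power."""
    n = 4
    while tost_power(n, cv, ratio, method=method) < power:
        n += 2
    return n

for m in ("nct", "t", "norm"):
    print(m, sample_size(cv=0.25, ratio=0.95, method=m))
```

As the abstract notes, the three distributional choices usually give very similar sample sizes; running the loop above with different CV and ratio values shows where the cruder approximations start to drift.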
Abstract:
The International Citicoline Trial in acUte Stroke is a sequential phase III study of the drug citicoline in the treatment of acute ischaemic stroke, initiated in 2006 in 56 treatment centres. The primary objective of the trial is to demonstrate improved recovery of patients randomized to citicoline relative to those randomized to placebo after 12 weeks of follow-up. The primary analysis will take the form of a global test combining the dichotomized results of assessments on three well-established scales: the Barthel Index, the modified Rankin scale and the National Institutes of Health Stroke Scale. This approach was previously used in the analysis of the influential National Institute of Neurological Disorders and Stroke trial of recombinant tissue plasminogen activator in stroke. The purpose of this paper is to describe how the trial was designed, and in particular how the simultaneous objectives of taking three assessment scales into account, performing a series of interim analyses, and conducting treatment allocation and adjusting the analyses for prognostic factors, including more than 50 treatment centres, were addressed. Copyright © 2008 John Wiley & Sons, Ltd.
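The abstract does not give the exact form of the global test. One common way such a test is implemented, and a reasonable reading of the NINDS-style analysis, is a GEE logistic model across the three dichotomized scales with a single common treatment effect; the sketch below follows that assumption on entirely hypothetical data, and the column names and sample size are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical wide-format trial data: one row per patient, with the three
# dichotomized 12-week outcomes (1 = favourable) and the treatment arm.
n = 200
df = pd.DataFrame({
    "patient": np.arange(n),
    "treat": rng.integers(0, 2, n),
    "barthel_ok": rng.integers(0, 2, n),
    "rankin_ok": rng.integers(0, 2, n),
    "nihss_ok": rng.integers(0, 2, n),
})

# Reshape to long format: one row per patient-scale combination.
long = df.melt(id_vars=["patient", "treat"],
               value_vars=["barthel_ok", "rankin_ok", "nihss_ok"],
               var_name="scale", value_name="favourable")

# GEE logistic model with an exchangeable working correlation within patient;
# the coefficient of 'treat' is a single common log odds ratio across the
# three scales, and its Wald test serves as the global test.
exog = pd.get_dummies(long[["treat", "scale"]], columns=["scale"], drop_first=True)
exog = sm.add_constant(exog.astype(float))
model = sm.GEE(long["favourable"].astype(float), exog, groups=long["patient"],
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
print("Global odds ratio, treatment vs placebo:", np.exp(result.params["treat"]))
```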
Abstract:
Although the sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty related to the "calibration" of the visual acuity of individual observers in the past. Traditionally, daisy-chain regression methods are applied to inter-calibrate the observers, which may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers against the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate an observational threshold [S_S], defined such that the observer is assumed to miss all groups with an area smaller than S_S and to report all groups larger than S_S. Next, using a Monte-Carlo method, we construct from the reference data set a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality; we emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
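A toy sketch of the correction-matrix idea described in this abstract, using synthetic daily group areas in place of the Royal Greenwich Observatory reference record; the area distribution, threshold value and matrix size are illustrative assumptions only, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the 1900-1976 reference record: for each day,
# the areas (millionths of the solar hemisphere) of the groups present.
n_days = 20000
reference = [rng.lognormal(mean=4.0, sigma=1.0, size=rng.poisson(4))
             for _ in range(n_days)]

def correction_matrix(reference, s_thresh, k_max=15):
    """Monte-Carlo style correction matrix for an observer with acuity
    threshold s_thresh: entry [k_obs, k_true] is the probability that the
    true daily group count was k_true given that the observer, who misses
    every group with area below s_thresh, reported k_obs groups."""
    counts = np.zeros((k_max + 1, k_max + 1))
    for areas in reference:
        k_true = min(len(areas), k_max)
        k_obs = min(int(np.sum(np.asarray(areas) >= s_thresh)), k_max)
        counts[k_obs, k_true] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

M = correction_matrix(reference, s_thresh=100.0)

# Correcting a reported daily count: expected true count given k_obs = 3.
k_obs = 3
expected_true = np.dot(M[k_obs], np.arange(M.shape[1]))
print(f"Observer reports 3 groups -> expected true count: {expected_true:.2f}")
```

Even in this toy setting the rows of M are not related to the observed count by a single proportionality factor, which is the non-linearity the abstract emphasizes.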