883 results for Variable sample size X- control chart


Relevance: 100.00%

Abstract:

Many studies of quantitative and disease traits in human genetics rely upon self-reported measures. Such measures are based on questionnaires or interviews and are often cheaper and more readily available than alternatives. However, their precision and potential bias cannot usually be assessed. Here we report a detailed quantitative genetic analysis of stature. We characterise the degree of measurement error by utilising a large sample of Australian twin pairs (857 MZ, 815 DZ) with both clinical and self-reported measures of height. Self-reported height measurements are shown to be more variable than clinical measures, which has led to lowered estimates of heritability in many previous studies of stature. In our twin sample the heritability estimate for clinical height exceeded 90%. Repeated-measures analysis shows that 2-3 times as many self-report measures are required to recover heritability estimates similar to those obtained from clinical measures. Bivariate genetic repeated-measures analysis of self-report and clinical height measures showed an additive genetic correlation > 0.98. We show that self-reported height is upwardly biased in older individuals and in individuals of short stature. By comparing clinical and self-report measures we also showed that there was a genetic component to females systematically misreporting their height; this phenomenon appeared to be absent in males. The results from the measurement-error analysis were subsequently used to assess the effects of error on the power to detect linkage in a genome scan. A moderate reduction in error (through the use of accurate clinical or multiple self-report measures) increased the effective sample size by 22%; elimination of measurement error led to an increase in effective sample size of 41%.
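The finding that 2-3 self-report measures recover clinical-level precision can be illustrated with the Spearman-Brown formula for the reliability of an average of k repeated measures. This is a sketch only: the reliabilities below are hypothetical stand-ins, not the study's estimates.

```python
# Illustrative sketch only: Spearman-Brown reliability of the mean of k
# repeated measures. The reliabilities below are hypothetical, not the
# twin study's actual estimates.

def spearman_brown(r: float, k: int) -> float:
    """Reliability of the average of k measures, each with reliability r."""
    return k * r / (1 + (k - 1) * r)

r_clinical = 0.98  # assumed reliability of a single clinical height measure
r_self = 0.95      # assumed reliability of a single self-report

# How many self-reports does it take to match one clinical measure?
k = 1
while spearman_brown(r_self, k) < r_clinical:
    k += 1
print(k)  # → 3 for these assumed reliabilities
```

With these stand-in numbers the answer lands in the 2-3 range the abstract reports; the paper's own figures come from its bivariate genetic model, not this formula.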

Relevance: 100.00%

Abstract:

Among the Solar System's bodies, the Moon, Mercury and Mars are at present, or have been in recent years, the object of space missions aimed, among other goals, at improving our knowledge of surface composition. Among the techniques for detecting a planet's mineralogical composition, both from remote and close-range platforms, visible and near-infrared reflectance (VNIR) spectroscopy is a powerful tool, because crystal-field absorption bands are related to particular transition metals in well-defined crystal structures, e.g., Fe2+ in the M1 and M2 sites of olivine or pyroxene (Burns, 1993). Thanks to the improvements in the spectrometers onboard recent missions, a more detailed interpretation of planetary surfaces can now be delineated. However, quantitative interpretation of planetary surface mineralogy is not always a simple task. In fact, several factors, such as the mineral chemistry, the presence of different minerals that absorb in a narrow spectral range, regolith with a variable particle-size range, space weathering, the atmospheric composition, etc., act in unpredictable ways on the reflectance spectra of a planetary surface (Serventi et al., 2014). One method for the interpretation of reflectance spectra of unknown materials involves the study of a number of spectra acquired in the laboratory under different conditions, such as different mineral abundances or different particle sizes, in order to derive empirical trends. This is the methodology followed in this PhD thesis: the factors listed above have been analyzed by creating, in the laboratory, a set of terrestrial analogues with well-defined composition and size. The aim of this work is to provide new tools and criteria to improve our knowledge of the composition of planetary surfaces.
In particular, mixtures with different contents and chemistries of plagioclase and mafic minerals have been spectroscopically analyzed at different particle sizes and with different relative mineral percentages. The reflectance spectra of each mixture have been analyzed both qualitatively (using the software ORIGIN®) and quantitatively, applying the Modified Gaussian Model (MGM, Sunshine et al., 1990) algorithm. In particular, the spectral-parameter variations of each absorption band have been evaluated against the volumetric FeO% content in the plagioclase (PL) phase and against the PL modal abundance. This delineated calibration curves of composition versus spectral parameters and allowed the implementation of spectral libraries. Furthermore, the trends derived from the terrestrial analogues analyzed here and from analogues in the literature have been applied to the interpretation of hyperspectral images of both plagioclase-rich (Moon) and plagioclase-poor (Mars) bodies.
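The MGM mentioned above models the natural log of reflectance as a continuum plus a sum of Gaussian absorption bands whose parameters are fitted to the measured spectrum. The sketch below shows only that functional form, simplified to ordinary Gaussians in wavelength (the real model works in energy space and fits the parameters); all band parameters are hypothetical.

```python
# A minimal sketch of the MGM functional form: ln(reflectance) modelled as a
# linear continuum plus Gaussian absorption bands. Simplified to Gaussians in
# wavelength; all parameters below are hypothetical, not fitted values.
import math

def mgm_log_reflectance(wl_nm, continuum, bands):
    """ln(reflectance) at wavelength wl_nm.
    continuum: (c0, c1) for a linear continuum c0 + c1*wl.
    bands: list of (strength, center_nm, width_nm); strength < 0 absorbs.
    """
    c0, c1 = continuum
    value = c0 + c1 * wl_nm
    for s, center, width in bands:
        value += s * math.exp(-((wl_nm - center) ** 2) / (2 * width ** 2))
    return value

# Hypothetical pyroxene-like bands near 1000 nm and 2000 nm.
bands = [(-0.6, 1000.0, 120.0), (-0.3, 2000.0, 250.0)]
continuum = (-0.2, -1.0e-5)

refl_1000 = math.exp(mgm_log_reflectance(1000.0, continuum, bands))
refl_1600 = math.exp(mgm_log_reflectance(1600.0, continuum, bands))
print(refl_1000 < refl_1600)  # deeper absorption at the 1000 nm band center
```

In practice the band centers, widths, and strengths recovered by fitting are the spectral parameters that the calibration curves relate to FeO% content and modal abundance.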

Relevance: 100.00%

Abstract:

The performance of seven minimization algorithms is compared on five neural network problems. These include a variable-step-size algorithm, conjugate gradient, and several methods with explicit analytic or numerical approximations to the Hessian.
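As a toy illustration of why a variable step size matters (this is not the paper's benchmark, and the objective f(x) = x² is hypothetical), compare fixed-step gradient descent with a simple backtracking rule that halves the step until the objective decreases:

```python
# Toy comparison (not the paper's benchmark): fixed-step gradient descent
# vs. a simple variable-step-size (backtracking) rule on f(x) = x^2.

def grad(x):
    return 2.0 * x

def fixed_step(x0, lr, iters):
    x = x0
    for _ in range(iters):
        x -= lr * grad(x)
    return x

def variable_step(x0, lr, iters):
    x = x0
    step = lr
    for _ in range(iters):
        # Halve the step until the move actually decreases f (backtracking).
        while (x - step * grad(x)) ** 2 > x ** 2:
            step *= 0.5
        x -= step * grad(x)
    return x

x_fixed = fixed_step(1.0, 1.1, 20)    # step too large: iterates diverge
x_var = variable_step(1.0, 1.1, 20)   # backtracking rescues convergence
print(abs(x_fixed) > 1.0, abs(x_var) < 1e-3)
```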

Relevance: 100.00%

Abstract:

The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the problem's sample-size requirement as a function of the PAC parameters. The exact sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimensions, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
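A concrete VC-dimension-1 example (a sketch, not the paper's algorithm): threshold classifiers h_t(x) = 1 iff x ≥ t on the real line have VC dimension 1, and for the consistent learner below, m ≥ (1/ε)·ln(1/δ) labelled examples suffice. The target threshold and the deterministic grid standing in for a random sample are hypothetical.

```python
# Sketch, not the paper's algorithm: PAC-learning the threshold class
# h_t(x) = 1 iff x >= t, which has VC dimension 1. The learner returns the
# smallest positively labelled example; m >= (1/eps)*ln(1/delta) suffices.
import math

def sample_size(eps: float, delta: float) -> int:
    return math.ceil(math.log(1.0 / delta) / eps)

def learn_threshold(data):
    """data: iterable of (x, label); return smallest positively labelled x."""
    positives = [x for x, y in data if y == 1]
    return min(positives) if positives else float("inf")

true_t = 0.3  # hypothetical target threshold
m = sample_size(eps=0.05, delta=0.01)
xs = [i / m for i in range(m)]  # deterministic stand-in for m uniform draws
data = [(x, int(x >= true_t)) for x in xs]
t_hat = learn_threshold(data)
print(m, round(t_hat, 3))  # 93 examples; learned threshold close to 0.3
```

The constant in front of (1/ε)·ln(1/δ) is exactly the kind of algorithm-dependent quantity the abstract says general VC bounds leave open.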

Relevance: 100.00%

Abstract:

Researchers often use 3-way interactions in moderated multiple regression analysis to test the joint effect of 3 independent variables on a dependent variable. However, further probing of significant interaction terms varies considerably and is sometimes error prone. The authors developed a significance test for slope differences in 3-way interactions and illustrate its importance for testing psychological hypotheses. Monte Carlo simulations revealed that sample size, magnitude of the slope difference, and data reliability affected test power. Application of the test to published data yielded detection of some slope differences that were undetected by alternative probing techniques and led to changes of results and conclusions. The authors conclude by discussing the test's applicability for psychological research. Copyright 2006 by the American Psychological Association.
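For reference, probing a 3-way interaction works off the fitted model y = b0 + b1·x + b2·z + b3·w + b4·xz + b5·xw + b6·zw + b7·xzw: the simple slope of y on x at fixed moderator values (z, w) is b1 + b4·z + b5·w + b7·zw, and a slope-difference test compares two such slopes. A sketch with hypothetical coefficients (the significance test itself additionally needs the coefficient covariance matrix, omitted here):

```python
# Sketch of probing a 3-way interaction. The simple slope of y on x at
# fixed standardised moderators (z, w) is b1 + b4*z + b5*w + b7*z*w.
# Coefficients below are hypothetical, not from the article.

def simple_slope(b, z, w):
    b1, b4, b5, b7 = b
    return b1 + b4 * z + b5 * w + b7 * z * w

b = (0.50, 0.20, -0.10, 0.30)  # hypothetical (b1, b4, b5, b7)

# Slopes at +/-1 SD of the moderators.
hi_hi = simple_slope(b, +1, +1)  # 0.5 + 0.2 - 0.1 + 0.3 = 0.9
hi_lo = simple_slope(b, +1, -1)  # 0.5 + 0.2 + 0.1 - 0.3 = 0.5
diff = hi_hi - hi_lo             # the slope difference being tested
print(round(diff, 3))            # its SE comes from the covariance matrix
```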

Relevance: 100.00%

Abstract:

Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, and should know the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study. This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
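The two rules of thumb above (an R squared below 50% is suspect; 5-10 subjects per variable) are easy to check mechanically. A sketch with made-up data, using simple linear regression for brevity:

```python
# Sketch of the two rules of thumb quoted above, on made-up data:
# (1) flag fits with R^2 below 50%; (2) aim for 5-10 subjects per predictor.

def fit_r_squared(xs, ys):
    """R^2 of the least-squares line y ~ a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

def enough_subjects(n_subjects, n_predictors, per_var=10):
    return n_subjects >= per_var * n_predictors

xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 14.1, 15.9, 18.0, 20.1]
r2 = fit_r_squared(xs, ys)
print(r2 > 0.5, enough_subjects(len(xs), 1))  # strong fit; 10 per predictor
```

Adding a second predictor to these 10 subjects would already fall below the 10-per-variable guide, which is the point the passage makes about including unimportant variables.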

Relevance: 100.00%

Abstract:

The use of antibiotics was investigated in twelve acute hospitals in England. Data were collected electronically and by questionnaire for the financial years 2001/2, 2002/3 and 2003/4. Hospitals were selected on the basis of their Medicines Management Self-Assessment Scores (MMAS) and included a cohort of three hospitals with integrated electronic prescribing systems. The total sample size was 6.65% of English NHS activity for 2001/2, based on Finished Consultant Episode (FCE) numbers. Data collected included all antibiotics dispensed (ATC category J01), hospital activity (FCEs and bed-days), Medicines Management Self-Assessment Scores, Antibiotic Medicines Management Scores (AMS), the Primary Care Trust (PCT) of origin of referral populations, PCT antibiotic prescribing rates, and the Index of Multiple Deprivation (IMD) for each PCT. The DDD/FCE (Defined Daily Dose/FCE) was found to correlate with the DDD/100 bed-days (r = 0.74), with a sample mean of 3.48 DDD/FCE in 2001/2 and 3.34 DDD/FCE in 2003/4. The MMAS and AMS were found to correlate (r = 0.74). No correlation was found between the MMAS and a range of qualitative indicators of antibiotic use. A number of indicators are proposed as triggers for further investigation, including a proportion of 0.24 for the ratio of third-generation to first/second-generation cephalosporin use, and five percent as the limit for parenteral quinolone DDD as a proportion of total quinolone DDD usage. It was possible to demonstrate a correlation between the IMD 2000 and primary care antibiotic prescribing rates, but not between primary and secondary care antibiotic prescribing rates for the same referral population, or between the weighted mean IMD 2000 for each hospital's referral population and the hospital antibiotic prescribing rate.
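The indicators above are simple ratios over dispensing and activity data; a sketch with hypothetical hospital-year figures (not figures from the study):

```python
# Sketch of the antibiotic-use indicators described above, with hypothetical
# figures: DDD per finished consultant episode (FCE), DDD per 100 bed-days,
# and the third- vs first/second-generation cephalosporin ratio proposed as
# a trigger (> 0.24) for further investigation.

def ddd_per_fce(total_ddd, fces):
    return total_ddd / fces

def ddd_per_100_beddays(total_ddd, beddays):
    return 100.0 * total_ddd / beddays

def cephalosporin_ratio(third_gen_ddd, first_second_gen_ddd):
    return third_gen_ddd / first_second_gen_ddd

# Hypothetical hospital-year figures.
total_ddd, fces, beddays = 180_000, 52_000, 400_000
print(round(ddd_per_fce(total_ddd, fces), 2))             # DDD/FCE
print(round(ddd_per_100_beddays(total_ddd, beddays), 1))  # DDD/100 bed-days
print(cephalosporin_ratio(6_000, 20_000) > 0.24)          # exceeds trigger
```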

Relevance: 100.00%

Abstract:

Citation information: Armstrong RA, Davies LN, Dunne MCM & Gilmartin B. Statistical guidelines for clinical studies of human vision. Ophthalmic Physiol Opt 2011, 31, 123-136. doi: 10.1111/j.1475-1313.2010.00815.x ABSTRACT: Statistical analysis of data can be complex, and different statisticians may disagree as to the correct approach, leading to conflict between authors, editors, and reviewers. The objective of this article is to provide some statistical advice for contributors to optometric and ophthalmic journals, to provide advice specifically relevant to clinical studies of human vision, and to recommend statistical analyses that could be used in a variety of circumstances. In submitting an article in which quantitative data are reported, authors should describe clearly the statistical procedures that they have used and justify each stage of the analysis. This is especially important if more complex or 'non-standard' analyses have been carried out. The article begins with some general comments relating to data analysis concerning sample size and 'power', hypothesis testing, parametric and non-parametric variables, 'bootstrap methods', one- and two-tail testing, and the Bonferroni correction. More specific advice is then given with reference to particular statistical procedures that can be used on a variety of types of data. Where relevant, examples of correct statistical practice are given with reference to recently published articles in the optometric and ophthalmic literature.
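Of the procedures listed, the Bonferroni correction is the simplest to state: with m comparisons, each test is run at α/m so that the family-wise error rate stays at α. A minimal sketch with hypothetical p-values:

```python
# Minimal sketch of the Bonferroni correction mentioned above: with m
# comparisons, run each test at alpha/m to control the family-wise error
# rate at alpha. The p-values below are hypothetical.

def bonferroni_significant(p_values, alpha=0.05):
    m = len(p_values)
    return [p < alpha / m for p in p_values]

p_values = [0.001, 0.012, 0.030, 0.200]
print(bonferroni_significant(p_values))  # only p < 0.05/4 = 0.0125 survive
```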

Relevance: 100.00%

Abstract:

Over the last few years, zonisamide has been proposed as a potentially useful medication for patients with focal seizures, with or without secondary generalization. Since psychiatric adverse effects, including mania, psychosis, and suicidal ideation, have been associated with its use, it was suggested that the presence of antecedent psychiatric disorders is an important factor associated with the discontinuation of zonisamide therapy in patients with epilepsy. We, therefore, set out to assess the tolerability profile of zonisamide in a retrospective chart review of 23 patients with epilepsy and comorbid mental disorders, recruited from two specialist pediatric (n=11) and adult (n=12) neuropsychiatry clinics. All patients had a clinical diagnosis of treatment-refractory epilepsy after extensive neurophysiological and neuroimaging investigations. The vast majority of patients (n=22/23, 95.7%) had tried previous antiepileptic medications, and most adult patients (n=9/11, 81.8%) were on concomitant medication for epilepsy. In the majority of cases, the psychiatric adverse effects of zonisamide were not severe. Four patients (17.4%) discontinued zonisamide because of lack of efficacy, whereas only one patient (4.3%) discontinued it because of the severity of psychiatric adverse effects (major depressive disorder). The low discontinuation rate of zonisamide in a selected population of patients with epilepsy and neuropsychiatric comorbidity suggests that this medication is safe and reasonably well-tolerated for use in patients with treatment-refractory epilepsy. Given the limitations of the present study, including the relatively small sample size, further research is warranted to confirm this finding. © 2013 Elsevier Inc.

Relevance: 100.00%

Abstract:

The principal theme of this thesis is the identification of additional factors affecting soft contact lens fit, and consequently the better prediction of that fit. Various models have been put forward in an attempt to predict the parameters that influence soft contact lens fit dynamics; however, the factors that drive variation in soft lens fit are still not fully understood. The investigations in this body of work involved the use of a variety of different imaging techniques to both quantify the anterior ocular topography and assess lens fit. The use of Anterior-Segment Optical Coherence Tomography (AS-OCT) allowed for a more complete characterisation of the cornea and corneoscleral profile (CSP) than either conventional keratometry or videokeratoscopy alone, and for the collection of normative data relating to the CSP for a substantial sample size. The scleral face was identified as being rotationally asymmetric, the mean corneoscleral junction (CSJ) angle being sharpest nasally and becoming progressively flatter at the temporal, inferior and superior limbal junctions. Additionally, 77% of all CSJ angles were within ±5° of 180°, demonstrating an almost tangential extension of the cornea to form the paralimbal sclera. Use of AS-OCT allowed for a more robust determination of corneal diameter than white-to-white (WTW) measurement, which is highly variable and dependent on changes in peripheral corneal transparency. Significant differences in ocular topography were found between different ethnicities and sexes, most notably for corneal diameter and corneal sagittal height variables. Lens tightness was found to be significantly correlated with the difference between horizontal CSJ angles (r = +0.40, P = 0.0086).
Modelling of the CSP data allowed for the prediction of up to 24% of the variance in contact lens fit; however, it is likely that stronger associations, and an increase in the modelled prediction of variance in fit, would have been found had an objective method of lens fit assessment been available. A subsequent investigation to determine the validity and repeatability of objective contact lens fit assessment using digital video capture showed no significant benefit over subjective evaluation. The technique, however, was employed in the ensuing investigation to show significant changes in lens fit between 8 hours (the longest duration of wear previously examined) and 16 hours, demonstrating that wearing time is an additional factor driving lens fit dynamics. The modelling of data from enhanced videokeratoscopy composite maps alone allowed up to 77% of the variance in soft contact lens fit to be predicted, and almost 90% when used in conjunction with OCT. The investigations provided further insight into ocular topography and the factors affecting soft contact lens fit.

Relevance: 100.00%

Abstract:

* This work was financially supported by RFBR-04-01-00858.


Relevance: 100.00%

Abstract:

2000 Mathematics Subject Classification: 45A05, 45B05, 45E05,45P05, 46E30

Relevance: 100.00%

Abstract:

The microstructure and thermoelectric properties of Yb-doped Ca0.9-xYbxLa0.1MnO3 (0 ≤ x ≤ 0.05) ceramics, prepared using powders derived from the Pechini method, have been investigated. X-ray diffraction analysis has shown that all samples exhibit a single phase with an orthorhombic perovskite structure. All ceramic samples possess high relative densities, ranging from 97.04% to 98.65%. The Seebeck coefficient is negative, indicating n-type conduction in all samples. The substitution of Yb for Ca leads to a marked decrease in the electrical resistivity, along with a moderate decrease in the absolute value of the Seebeck coefficient. The highest power factor is obtained for the sample with x = 0.05. The electrical conduction in these compounds is due to electron hopping between Mn3+ and Mn4+, which is enhanced by increasing the Yb content.
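The trade-off described above, where resistivity falls faster than the Seebeck coefficient, is captured by the thermoelectric power factor PF = S²/ρ. A sketch with hypothetical values (not the paper's measurements):

```python
# Sketch of the thermoelectric power factor PF = S^2 / rho (S: Seebeck
# coefficient, rho: electrical resistivity). If doping cuts rho more than
# it cuts |S|, PF rises. All values below are hypothetical.

def power_factor(seebeck_uV_per_K, resistivity_ohm_cm):
    """PF in W m^-1 K^-2, from S in microvolt/K and rho in ohm*cm."""
    s = seebeck_uV_per_K * 1e-6      # convert to V/K
    rho = resistivity_ohm_cm * 1e-2  # convert to ohm*m
    return s * s / rho

# Hypothetical undoped vs Yb-doped values: doping lowers rho sharply while
# only moderately lowering |S|, so PF increases.
pf_undoped = power_factor(-120.0, 0.10)
pf_doped = power_factor(-100.0, 0.04)
print(pf_doped > pf_undoped)  # True
```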

Relevance: 100.00%

Abstract:

Privately owned water utilities typically operate under a regulated monopoly regime. Price-cap regulation has been introduced as a means to enhance efficiency and innovation. The main objective of this paper is to propose a methodology for measuring productivity change across companies and over time when the sample size is limited. An empirical application is developed for the UK water and sewerage companies (WaSCs) for the period 1991-2008. A panel index approach is applied to decompose and derive unit-specific productivity growth as a function of the productivity growth achieved by benchmark firms and the catch-up to the benchmark firm achieved by less productive firms. The results indicated that significant gains in productivity occurred after 2000, when the regulator set tighter reviews. However, the average WaSC still needs to improve by 2.69% over a period of five years to achieve performance comparable to the benchmark firm. This study is relevant to regulators who are interested in developing comparative performance measurement when the number of water companies that can be evaluated is limited. Moreover, setting an appropriate X factor is essential to improving the efficiency of water companies, and this study helps to address that challenge.
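The panel-index decomposition described above splits a firm's productivity growth into the benchmark firm's growth plus the firm's catch-up to (or fall-back from) that benchmark. A sketch in log-growth terms, with hypothetical productivity levels:

```python
# Sketch of the panel-index decomposition described above:
# firm growth = benchmark growth + catch-up. Figures are hypothetical.
import math

def decompose_growth(firm_t0, firm_t1, bench_t0, bench_t1):
    """Log-growth decomposition of a firm's productivity change."""
    firm_growth = math.log(firm_t1 / firm_t0)
    bench_growth = math.log(bench_t1 / bench_t0)
    catch_up = firm_growth - bench_growth  # gap closed vs. the benchmark
    return firm_growth, bench_growth, catch_up

# Hypothetical productivity indices: a laggard firm improving faster than
# the benchmark, so its catch-up term is positive.
firm_g, bench_g, catch = decompose_growth(0.80, 0.88, 1.00, 1.05)
print(round(firm_g - (bench_g + catch), 12))  # identity holds: 0.0
```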