30 results for Variable sample size

in Aston University Research Archive


Relevance: 100.00%

Abstract:

Statistical software is now commonly available to calculate power (P') and sample size (N) for most experimental designs. In many circumstances, however, sample size is constrained by lack of time, by cost and, in research involving human subjects, by the problems of recruiting suitable individuals. In addition, the calculation of N is often based on erroneous assumptions about variability, and such estimates are therefore often inaccurate. At best, we would suggest that such calculations provide only a very rough guide as to how to proceed in an experiment. Nevertheless, calculation of P' is very useful, especially in experiments that have failed to detect a difference which the experimenter thought was present. We would recommend that P' always be calculated in these circumstances to determine whether the experiment was actually too small to test null hypotheses adequately.
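A post-hoc power calculation of the kind recommended here can be sketched in a few lines with standard software; the effect size, group size and significance level below are illustrative values, not figures from the abstract:

```python
# Post-hoc power for a two-sample t-test: given the effect size observed
# and the group size actually achieved, how likely was the experiment to
# detect a real difference? Values well below 0.8 suggest the study was
# too small. Effect size and group size here are assumed for illustration.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5,   # Cohen's d (assumed)
                              nobs1=20,          # subjects per group (assumed)
                              alpha=0.05)
print(f"Achieved power: {power:.2f}")
```

With these values the achieved power is well below the conventional 0.8 threshold, which is exactly the situation the abstract warns about.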

Relevance: 100.00%

Abstract:

The concept of sample size and statistical power estimation is now something that optometrists who want to perform research, whether in practice or in an academic institution, can no longer hide away from. Ethics committees, journal editors and grant-awarding bodies increasingly request that all research be backed up with sample size and statistical power estimation in order to justify a study and its findings. This article presents a step-by-step guide to the process of determining sample size and statistical power. It builds on statistical concepts presented in earlier articles in Optometry Today by Richard Armstrong and Frank Eperjesi.
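The a priori half of such a step-by-step guide can be sketched with standard software; the anticipated effect size below is an assumed "medium" Cohen's d, chosen purely for illustration:

```python
# A priori sample size: solve for the subjects per group needed to reach
# 80% power at alpha = 0.05, given an anticipated effect size. A "medium"
# Cohen's d of 0.5 is assumed here purely for illustration; in practice
# it comes from pilot data or previous studies.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                          power=0.8,
                                          alpha=0.05)
print(f"Required sample size per group: {n_per_group:.0f}")   # 64
```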

Relevance: 90.00%

Abstract:

The performance of seven minimization algorithms is compared on five neural network problems. These include a variable-step-size algorithm, conjugate gradient, and several methods with explicit analytic or numerical approximations to the Hessian.
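A minimal comparison in this spirit (not the study's own benchmark) can be run with scipy's optimisers; the Rosenbrock function stands in for a neural-network error surface:

```python
# Comparing minimisers on a small test problem: conjugate gradient ("CG")
# and a quasi-Newton method that builds a numerical approximation to the
# Hessian ("BFGS"). The Rosenbrock function is a stand-in for a
# neural-network error surface.
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.array([-1.2, 1.0])                  # standard starting point
for method in ("CG", "BFGS"):
    res = minimize(rosen, x0, method=method)
    print(f"{method}: f* = {res.fun:.2e} after {res.nfev} function evaluations")
```

Both methods find the minimum; the number of function evaluations is the quantity such comparisons typically report.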

Relevance: 90.00%

Abstract:

The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the sample-size requirement of the problem as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
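As an illustrative sketch (not the paper's algorithm), threshold functions on the real line form a standard class of VC dimension 1, and a consistent learner PAC-learns them from roughly (1/ε)·ln(1/δ) samples:

```python
# Illustrative sketch (not the paper's algorithm): threshold functions
# h_t(x) = 1[x >= t] on the real line have VC dimension 1. A consistent
# learner that returns the smallest positively labelled example as its
# threshold PAC-learns this class from m >= (1/eps) * ln(1/delta) samples.
import math
import random

def required_samples(eps, delta):
    """Sufficient sample size for learning thresholds (VC dimension 1)."""
    return math.ceil(math.log(1 / delta) / eps)

def learn_threshold(sample):
    """Return the smallest x labelled positive (a consistent hypothesis)."""
    positives = [x for x, y in sample if y == 1]
    return min(positives) if positives else float("inf")

random.seed(0)
true_t, eps, delta = 0.3, 0.05, 0.05
m = required_samples(eps, delta)                      # 60 samples
data = [(x, int(x >= true_t)) for x in (random.random() for _ in range(m))]
t_hat = learn_threshold(data)
print(m, round(abs(t_hat - true_t), 3))               # error shrinks with m
```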

Relevance: 90.00%

Abstract:

Researchers often use 3-way interactions in moderated multiple regression analysis to test the joint effect of 3 independent variables on a dependent variable. However, further probing of significant interaction terms varies considerably and is sometimes error prone. The authors developed a significance test for slope differences in 3-way interactions and illustrated its importance for testing psychological hypotheses. Monte Carlo simulations revealed that sample size, magnitude of the slope difference, and data reliability affected test power. Application of the test to published data yielded detection of some slope differences that were undetected by alternative probing techniques and led to changes of results and conclusions. The authors conclude by discussing the test's applicability for psychological research. Copyright 2006 by the American Psychological Association.
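Fitting the underlying 3-way interaction model can be sketched as follows; the data and coefficients are simulated for illustration, and the slope-difference test would then probe a significant x:z:w term further:

```python
# Fitting a 3-way interaction model in moderated multiple regression.
# The data are simulated for illustration; a significant x:z:w term is
# what slope-difference tests would then examine in more detail.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({"x": rng.normal(size=n),
                   "z": rng.normal(size=n),
                   "w": rng.normal(size=n)})
df["y"] = (1.0 + 0.5 * df["x"] + 0.3 * df["x"] * df["z"] * df["w"]
           + rng.normal(size=n))

# "x * z * w" expands to all main effects plus 2- and 3-way interactions
model = smf.ols("y ~ x * z * w", data=df).fit()
print(model.params["x:z:w"], model.pvalues["x:z:w"])
```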

Relevance: 90.00%

Abstract:

Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, together with knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5-10 times the number of variables included in the study [5]. This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.

Relevance: 90.00%

Abstract:

The principal theme of this thesis is the identification of additional factors affecting soft contact lens fit, and consequently the improvement of its prediction. Various models have been put forward in an attempt to predict the parameters that influence soft contact lens fit dynamics; however, the factors that drive variation in soft lens fit are still not fully understood. The investigations in this body of work involved the use of a variety of imaging techniques both to quantify the anterior ocular topography and to assess lens fit. The use of Anterior-Segment Optical Coherence Tomography (AS-OCT) allowed for a more complete characterisation of the cornea and corneoscleral profile (CSP) than either conventional keratometry or videokeratoscopy alone, and for the collection of normative data relating to the CSP for a substantial sample size. The scleral face was identified as being rotationally asymmetric, the mean corneoscleral junction (CSJ) angle being sharpest nasally and becoming progressively flatter at the temporal, inferior and superior limbal junctions. Additionally, 77% of all CSJ angles were within ±5° of 180°, demonstrating an almost tangential extension of the cornea to form the paralimbal sclera. Use of AS-OCT allowed for a more robust determination of corneal diameter than white-to-white (WTW) measurement, which is highly variable and dependent on changes in peripheral corneal transparency. Significant differences in ocular topography were found between different ethnicities and sexes, most notably for corneal diameter and corneal sagittal height variables. Lens tightness was found to be significantly correlated with the difference between horizontal CSJ angles (r = +0.40, P = 0.0086).
Modelling of the CSP data allowed for prediction of up to 24% of the variance in contact lens fit; however, it is likely that stronger associations, and an increase in the modelled prediction of variance in fit, would have been found had an objective method of lens fit assessment been available. A subsequent investigation to determine the validity and repeatability of objective contact lens fit assessment using digital video capture showed no significant benefit over subjective evaluation. The technique was, however, employed in the ensuing investigation to show significant changes in lens fit between 8 hours (the longest duration of wear previously examined) and 16 hours, demonstrating that wearing time is an additional factor driving lens fit dynamics. Modelling of data from enhanced videokeratoscopy composite maps alone allowed up to 77% of the variance in soft contact lens fit to be predicted, and up to almost 90% when used in conjunction with OCT. The investigations provided further insight into ocular topography and the factors affecting soft contact lens fit.

Relevance: 80.00%

Abstract:

The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n³), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite-dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.
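A minimal sketch of the posterior-mean computation, with an assumed RBF covariance; the Cholesky factorisation of the n × n covariance matrix is the O(n³) step:

```python
# A minimal Gaussian-process regression posterior mean with an assumed
# RBF covariance. Forming and Cholesky-factorising the n x n covariance
# matrix is the O(n^3) step mentioned in the abstract; the kernel and
# noise parameters are illustrative.
import numpy as np

def rbf(a, b, length=0.5):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 5, 30))            # n = 30 training inputs
y = np.sin(x) + 0.1 * rng.normal(size=30)     # noisy observations
x_star = np.linspace(0, 5, 100)

K = rbf(x, x) + 0.01 * np.eye(30)             # covariance + noise variance
L = np.linalg.cholesky(K)                     # the O(n^3) factorisation
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = rbf(x_star, x) @ alpha                 # posterior mean = ideal regression
print(np.max(np.abs(mean - np.sin(x_star))))
```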

Relevance: 80.00%

Abstract:

Purpose: To describe the methodology, sampling strategy and preliminary results for the Aston Eye Study (AES), a cross-sectional study to determine the prevalence of refractive error and its associated ocular biometry in a large multi-racial sample of school children from the metropolitan area of Birmingham, England. Methods: A target sample of 1700 children aged 6–7 years and 1200 aged 12–13 years is being drawn from Birmingham schools selected randomly with stratification by area deprivation index (a measure of socio-economic status). Schools with pupils predominantly (>70%) from a single race are excluded. Sample size calculations account for the likely participation rate and the clustering of individuals within schools. Procedures involve standardised protocols to allow for comparison with international population-based data. Visual acuity, non-contact ocular biometry (axial length, corneal radius of curvature and anterior chamber depth) and cycloplegic autorefraction are measured in both eyes. Distance and near oculomotor balance, height and weight are also assessed. Questionnaires for parents and older children will allow the influence of environmental factors on refractive error to be examined. Results: Recruitment and data collection are ongoing (currently N = 655). Preliminary cross-sectional data on 213 South Asian, 44 black African Caribbean and 70 white European children aged 6–7 years, and 114 South Asian, 40 black African Caribbean and 115 white European children aged 12–13 years, found myopia prevalences of 9.4% and 29.4% for the two age groups respectively. A more negative mean spherical equivalent refraction (SER) was observed in older children (-0.21 D vs +0.87 D). Ethnic differences in myopia prevalence are emerging, with South Asian children having higher levels than white European children (36.8% vs 18.6% for the older children).
Axial length, corneal radius of curvature and anterior chamber depth were normally distributed, while SER was leptokurtic (p < 0.001) with a slight negative skew. Conclusions: The AES will allow ethnic differences in the ocular characteristics of children from a large metropolitan area of the UK to be examined. The findings to date indicate the emergence of higher levels of myopia by early adolescence in second and third generation British South Asians, compared to white European children. The continuation of the AES will allow the early determinants of these ethnic differences to be studied.
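The effect of clustering within schools on the required sample size can be sketched with the usual design-effect formula; the ICC, cluster size and participation rate below are assumptions for illustration, not the AES figures:

```python
# Inflating a simple-random-sample size for school-level clustering via
# the design effect deff = 1 + (m - 1) * ICC, and then again for the
# expected participation rate. ICC, cluster size and participation rate
# are assumed values, not the study's.
import math

def cluster_adjusted_n(n_srs, cluster_size, icc, participation):
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_srs * deff / participation)

# e.g. 1000 children needed under simple random sampling, ~30 pupils
# tested per school, ICC = 0.02, 70% participation:
print(cluster_adjusted_n(1000, 30, 0.02, 0.7))   # 2258
```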

Relevance: 80.00%

Abstract:

Background There is a paucity of data describing the prevalence of childhood refractive error in the United Kingdom. The Northern Ireland Childhood Errors of Refraction study, along with its sister study the Aston Eye Study, are the first population-based surveys of children using both random cluster sampling and cycloplegic autorefraction to quantify levels of refractive error in the United Kingdom. Methods Children aged 6–7 years and 12–13 years were recruited from a stratified random sample of primary and post-primary schools, representative of the population of Northern Ireland as a whole. Measurements included assessment of visual acuity, oculomotor balance, ocular biometry and cycloplegic binocular open-field autorefraction. Questionnaires were used to identify putative risk factors for refractive error. Results 399 (57%) of the 6–7-year-olds and 669 (60%) of the 12–13-year-olds participated. School participation rates did not vary statistically significantly with the size of the school, whether the school was urban or rural, or whether it was in a deprived/non-deprived area. The gender balance, ethnicity and type of schooling of participants were reflective of the Northern Ireland population. Conclusions The study design, sample size and methodology will ensure accurate measures of the prevalence of refractive errors in the target population and will facilitate comparisons with other population-based refractive data.

Relevance: 80.00%

Abstract:

Purpose: The purpose of the paper is to examine the kind of HRM practices being implemented by overseas firms in their Indian subsidiaries and also to analyze the linkage between HRM practices and organizational performance. Design/methodology/approach: The paper utilizes a mixture of both quantitative and qualitative techniques via personal interviews in 76 subsidiaries. Findings: The results show that while the introduction of HRM practices from the foreign parent organization is negatively associated with performance, local adaption of HRM practices is positively related with the performance of foreign firms operating in India. Research limitations/implications: The main limitations include data being collected by only one respondent from each firm, and the relatively small sample size. Practical implications: The key message for practitioners is that HRM systems do improve organizational performance in the Indian subsidiaries of foreign firms, and an emphasis on the localization of HRM practices can further contribute in this regard. Originality/value: This is perhaps the very first investigation of its kind in the Indian context. © Emerald Group Publishing Limited.

Relevance: 80.00%

Abstract:

With the advent of globalisation, companies all around the world must improve their performance in order to survive. The threats are coming from everywhere, and in different ways, such as low cost products, high quality products, new technologies, and new products. Different companies in different countries are using various techniques and quality criteria items to strive for excellence. Continuous improvement techniques are used to enable companies to improve their operations. Therefore, companies are using techniques such as TQM, Kaizen, Six-Sigma, Lean Manufacturing, and quality award criteria items such as Customer Focus, Human Resources, Information & Analysis, and Process Management. The purpose of this paper is to compare the use of these techniques and criteria items in two countries, Mexico and the United Kingdom, which differ in culture and industrial structure. In terms of the use of continuous improvement tools and techniques, Mexico formally started to deal with continuous improvement by creating its National Quality Award soon after the Americans began the Malcolm Baldrige National Quality Award. The United Kingdom formally started by using the European Quality Award (EQA), modified and renamed as the EFQM Excellence Model. The methodology used in this study was to undertake a literature review of the subject matter and to study some general applications around the world. A questionnaire survey was then designed and a survey undertaken based on the same scale, about the same sample size, and about the same industrial sector within the two countries. The survey presents a brief definition of each of the constructs to facilitate understanding of the questions. The analysis of the data was then conducted with the assistance of a statistical software package. The survey results indicate both similarities and differences in the strengths and weaknesses of the companies in the two countries.
One outcome of the analysis is that companies can use the results to benchmark themselves, and thus act to reinforce their strengths and reduce their weaknesses.

Relevance: 80.00%

Abstract:

Increasing mail-survey response using monetary incentives is a proven method, but not one that is cost-effective in every population. This paper tackles the questions of whether it is worth using monetary incentives, and of what size the inducement should be, by testing a logit model of the impact of prepaid monetary incentives on response rates in consumer and organizational mail surveys. The results support their use and show that the inducement value makes a significant impact on the effect size. Importantly, no significant differences were found between consumer and organizational populations. A cost-benefit model is developed to estimate the optimum incentive when attempting to minimize overall survey costs for a given sample size. © 2006 Operational Research Society Ltd. All rights reserved.
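The cost-benefit idea can be sketched as follows; the logit coefficients and costs below are invented for illustration and are not the paper's estimates:

```python
# Sketch of the cost-benefit idea: model the response rate as a logistic
# function of incentive value, then search for the incentive minimising
# total cost (mailing plus prepaid incentives) for a required number of
# completed responses. All coefficients and costs are invented.
import math

def response_rate(incentive, b0=-2.0, b1=1.0):
    """Assumed logit model: logit(p) = b0 + b1 * incentive."""
    return 1 / (1 + math.exp(-(b0 + b1 * incentive)))

def total_cost(incentive, needed=400, mail_cost=2.0):
    mailed = needed / response_rate(incentive)   # pieces mailed for `needed` returns
    return mailed * (mail_cost + incentive)      # incentive is prepaid to everyone

cost, best = min((total_cost(v), v) for v in range(6))
print(f"Optimal incentive: {best} (total cost {cost:.0f})")
```

With these invented coefficients, a moderate incentive beats both no incentive (too many pieces must be mailed) and a large one (the prepaid cost dominates).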

Relevance: 80.00%

Abstract:

In industrialised countries age-related macular disease (ARMD) is the leading cause of visual loss in older people. Because oxidative stress is purported to be associated with an increased risk of disease development, the role of antioxidant supplementation is of interest. Lutein is a carotenoid antioxidant that accumulates within the retina and is thought to filter blue light. Increased levels of lutein have been associated with reduced risk of developing ARMD and improvements in visual and retinal function in eyes with ARMD. The aim of this randomised controlled trial (RCT) was to investigate the effect of a lutein-based nutritional supplement on subjective and objective measures of visual function in healthy eyes and in eyes with age-related maculopathy (ARM) – an early form of ARMD. Supplement withdrawal effects were also investigated. A sample of 66 healthy older (HO), healthy younger (HY), and ARM eyes was randomly allocated to receive a lutein-based supplement or no treatment for 40 weeks. The supplemented group then stopped supplementation so that the effects of withdrawal could be examined over a further 20 weeks. The primary outcome measure was multifocal electroretinogram (mfERG) N1P1 amplitude. Secondary outcome measures were mfERG N1, P1 and N2 latency, contrast sensitivity (CS), visual acuity (VA) and macular pigment optical density (MPOD). Sample sizes were sufficient for the RCT to have an 80% power to detect a significant clinical effect at the 5% significance level for all outcome measures when the healthy eye groups were combined, and for CS, VA and mfERG in the ARM group. This RCT demonstrates significant improvements in MPOD in HY and HO supplemented eyes. When HY and HO supplemented groups were combined, MPOD improvements were maintained, and mfERG ring 2 P1 latency became shorter. On withdrawal of the supplement, mfERG ring 1 N1P1 amplitude reduced in HO eyes. When HO and HY groups were combined, mfERG ring 1 and ring 2 N1P1 amplitudes were reduced.
In ARM eyes, ring 3 N2 latency and ring 4 P1 latency became longer. These statistically significant changes may not be clinically significant. The finding that a lutein-based supplement increases MPOD in healthy eyes, but does not increase mfERG amplitudes contrasts with the CARMIS study and contributes to the debate on the use of nutritional supplementation in ARM.

Relevance: 80.00%

Abstract:

Experiments combining different groups or factors and which use ANOVA are a powerful method of investigation in applied microbiology. ANOVA enables not only the effect of individual factors to be estimated but also their interactions; information which cannot be obtained readily when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the sample size required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA are a more important indicator of the ‘power’ of the experiment than the number of replicates. A good method is to ensure, where possible, that sufficient replication is present to achieve 15 DF for the error term of the ANOVA testing effects of particular interest. Finally, it is important always to consider the design of the experiment, because this determines the appropriate ANOVA to use. Hence, it is necessary to be able to identify the different forms of ANOVA appropriate to different experimental designs, and to recognise when a design is a split-plot or incorporates a repeated measure. If there is any doubt about which ANOVA to use in a specific circumstance, the researcher should seek advice from a statistician with experience of research in applied microbiology.
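The error degrees of freedom for a two-factor factorial design with n replicates per cell is ab(n − 1); a small helper can find the replication needed to reach the recommended 15 DF:

```python
# Error degrees of freedom in an a x b factorial ANOVA with n replicates
# per cell is a * b * (n - 1). This helper finds the smallest replication
# that reaches the recommended 15 DF for the error term.
def error_df(a, b, n):
    return a * b * (n - 1)

def min_replicates(a, b, target_df=15):
    n = 2                                # need at least two observations per cell
    while error_df(a, b, n) < target_df:
        n += 1
    return n

# A 3 x 2 factorial: 2 replicates give only 6 error DF, too few;
print(min_replicates(3, 2), error_df(3, 2, min_replicates(3, 2)))  # 4 18
```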