14 results for Collectionwise Normality
in Aston University Research Archive
Abstract:
This paper investigates whether the non-normality typically observed in daily stock-market returns could arise from the joint presence of breaks and GARCH effects. It proposes a data-driven procedure to credibly identify the number and timing of breaks and applies it to the benchmark stock-market indices of 27 OECD countries. The findings suggest that a substantial element of the observed deviations from normality might indeed be due to the co-existence of breaks and GARCH effects. However, the presence of structural changes, rather than the GARCH effects, is found to be the primary source of the non-normality. Also, there is still some remaining excess kurtosis that is unlikely to be linked to the specification of the conditional volatility or the presence of breaks. Finally, an interesting sideline result implies that GARCH models have limited capacity for forecasting stock-market volatility.
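As a rough illustration of the kind of exercise this abstract describes, the sketch below (not the authors' procedure, and with the break-detection step omitted) fits a GARCH(1,1) with the third-party `arch` package and compares normality statistics for raw returns and GARCH-standardised residuals.

```python
# Hedged sketch (not the authors' procedure): fit a GARCH(1,1) with the `arch`
# package and compare normality statistics before and after standardising the
# returns by the fitted conditional volatility. Break detection is omitted here.
import numpy as np
from scipy import stats
from arch import arch_model

def normality_before_after_garch(returns):
    """returns: 1-D array of daily returns, ideally in percent for numerical stability."""
    res = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
    std_resid = np.asarray(res.resid / res.conditional_volatility)  # GARCH-standardised residuals
    for label, x in [("raw returns", np.asarray(returns)), ("standardised residuals", std_resid)]:
        jb, p = stats.jarque_bera(x)
        print(f"{label}: excess kurtosis = {stats.kurtosis(x):.2f}, Jarque-Bera = {jb:.1f} (p = {p:.3f})")

# Example usage (hypothetical price series):
# normality_before_after_garch(100 * np.diff(np.log(prices)))
```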
Abstract:
Different types of numerical data can be collected in a scientific investigation and the choice of statistical analysis will often depend on the distribution of the data. A basic distinction between variables is whether they are ‘parametric’ or ‘non-parametric’. When a variable is parametric, the data come from a symmetrically shaped distribution known as the ‘Gaussian’ or ‘normal’ distribution, whereas non-parametric variables may have a distribution which deviates markedly in shape from normal. This article describes several aspects of the problem of non-normality, including: (1) how to test for two common types of deviation from a normal distribution, viz., ‘skew’ and ‘kurtosis’, (2) how to fit the normal distribution to a sample of data, (3) the transformation of non-normally distributed data and scores, and (4) commonly used ‘non-parametric’ statistics which can be used in a variety of circumstances.
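The four steps listed in this abstract map directly onto standard SciPy calls; the snippet below is a minimal sketch with made-up example data rather than the article's own worked examples.

```python
# Minimal sketch of the four steps using SciPy; the gamma-distributed sample is
# illustrative data, not the article's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=200)   # positively skewed example scores

# (1) tests for skew and kurtosis
print(stats.skewtest(x), stats.kurtosistest(x))

# (2) fit a normal distribution to the sample
mu, sigma = stats.norm.fit(x)

# (3) a log transformation often normalises positively skewed scores
print(stats.skewtest(np.log(x)))

# (4) a common non-parametric alternative to the two-sample t-test
y = rng.gamma(shape=2.0, scale=1.5, size=200)
print(stats.mannwhitneyu(x, y))
```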
Abstract:
Purpose: To develop a questionnaire that subjectively assesses near visual function in patients with 'accommodating' intraocular lenses (IOLs). Methods: A literature search of existing vision-related quality-of-life instruments identified all questions relating to near visual tasks. Questions were combined if repeated in multiple instruments. Further relevant questions were added and item interpretation confirmed through multidisciplinary consultation and focus groups. A preliminary 19-item questionnaire was presented to 22 subjects at their 4-week visit after first-eye phacoemulsification with 'accommodative' IOL implantation, and again 6 and 12 weeks post-operatively. Rasch analysis, frequency of endorsement, and tests of normality (skew and kurtosis) were used to reduce the instrument. Cronbach's alpha and test-retest reliability (intraclass correlation coefficient, ICC) were determined for the final questionnaire. Construct validity was assessed by Pearson's product moment correlation (PPMC) of questionnaire scores with reading acuity (RA) and with critical print size (CPS) reading speed. Criterion validity was assessed by receiver operating characteristic (ROC) curve analysis, and dimensionality of the questionnaire was assessed by factor analysis. Results: Rasch analysis eliminated nine items due to poor fit statistics. The final items have good separation (2.55), internal consistency (Cronbach's α = 0.97) and test-retest reliability (ICC = 0.66). PPMC of questionnaire scores with RA was 0.33, and with CPS reading speed was 0.08. Area under the ROC curve was 0.88 and factor analysis revealed one principal factor. Conclusion: The pilot data indicate that the questionnaire is an internally consistent, reliable and valid instrument that could be useful for assessing near visual function in patients with 'accommodating' IOLs. The questionnaire will now be expanded to include other types of presbyopic correction. © 2007 British Contact Lens Association.
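For readers unfamiliar with the internal-consistency statistic quoted above, the following is an illustrative implementation of Cronbach's alpha for a subjects-by-items score matrix; it is not the study's analysis code.

```python
# Illustrative only (not the study's analysis): Cronbach's alpha for a
# subjects x items matrix of questionnaire scores.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = subjects, columns = questionnaire items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    sum_item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)
```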
Abstract:
Testing whether an observed distribution deviates from normality is a common procedure available in statistical software. Most packages offer two ways of judging whether the observed distribution deviates significantly from the expected one, viz., the chi-square test and the Kolmogorov-Smirnov (KS) test. These tests have different sensitivities and problems and often give conflicting results. The results of these tests, together with inspection of the shape of the observed distribution, should be used to judge normality.
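A hedged sketch of the two checks this note discusses, using SciPy; the bin layout for the chi-square version and the use of sample estimates of the mean and SD are illustrative choices (strictly, estimating the parameters from the data calls for a Lilliefors-type correction to the KS p-value).

```python
# Sketch of both checks with SciPy; bin layout and use of sample estimates are
# illustrative choices, not the note's own procedure.
import numpy as np
from scipy import stats

def normality_checks(x, n_bins=10):
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)

    # Kolmogorov-Smirnov test against a normal with the sample mean and SD
    ks = stats.kstest(x, "norm", args=(mu, sigma))

    # Chi-square test: observed bin counts versus expected normal counts
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    observed, _ = np.histogram(x, bins=edges)
    expected = len(x) * np.diff(stats.norm.cdf(edges, mu, sigma))
    expected *= observed.sum() / expected.sum()          # make the totals match
    chi2 = stats.chisquare(observed, expected, ddof=2)   # two estimated parameters
    return ks, chi2
```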
Abstract:
When testing the difference between two groups, if previous data indicate non-normality, then either transform the data (if they comprise percentages, integers or scores) or use a non-parametric test. If there is uncertainty whether the data are normally distributed, then deviations from normality are likely to be small if the data are measurements to three significant figures. Unless there is clear evidence that the distribution is non-normal, it is more efficient to use the conventional t-test. It is poor statistical practice to carry out both the parametric and non-parametric tests on a set of data and then choose the result that is most convenient to the investigator!
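The decision rule described above can be illustrated as follows; this is a sketch with simulated skewed scores, and the log transform is only one of the possible transformations.

```python
# Sketch of the transform-or-go-non-parametric choice; the lognormal samples and
# the log transform are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.lognormal(mean=1.0, sigma=0.4, size=30)   # e.g. positively skewed scores
b = rng.lognormal(mean=1.2, sigma=0.4, size=30)

# If previous data indicate non-normality, either transform...
print(stats.ttest_ind(np.log(a), np.log(b)))
# ...or use the non-parametric alternative
print(stats.mannwhitneyu(a, b))
# With no clear evidence of non-normality, the conventional t-test is the more efficient choice
print(stats.ttest_ind(a, b))
```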
Abstract:
There may be circumstances where it is necessary for microbiologists to compare variances rather than means, e.g., in analysing data from experiments to determine whether a particular treatment alters the degree of variability, or in testing the assumption of homogeneity of variance prior to other statistical tests. All of the tests described in this Statnote have their limitations. Bartlett’s test may be too sensitive, but Levene’s and the Brown-Forsythe tests also have problems. We would recommend the use of the variance-ratio test to compare two variances and the careful application of Bartlett’s test if there are more than two groups. Considering that these tests are not particularly robust, it should be remembered that the homogeneity of variance assumption is usually the least important of those considered when carrying out an ANOVA. If there is concern about this assumption, and especially if the other assumptions of the analysis are also unlikely to be met (e.g., lack of normality or non-additivity of treatment effects), then it may be better either to transform the data or to carry out a non-parametric test on the data.
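By way of illustration, the two-group variance-ratio (F) test recommended above can be computed directly, since SciPy exposes Bartlett's and Levene's/Brown-Forsythe tests but no ready-made two-sample variance-ratio function; this is a sketch, not the Statnote's own code.

```python
# Sketch only: a two-sided variance-ratio (F) test for two groups, plus the
# SciPy calls for Bartlett's, Levene's and the Brown-Forsythe tests.
import numpy as np
from scipy import stats

def variance_ratio_test(a, b):
    """Two-sided F test for the equality of two variances."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    v1, v2 = a.var(ddof=1), b.var(ddof=1)
    if v1 >= v2:
        f, df1, df2 = v1 / v2, len(a) - 1, len(b) - 1
    else:
        f, df1, df2 = v2 / v1, len(b) - 1, len(a) - 1
    p = min(2 * stats.f.sf(f, df1, df2), 1.0)
    return f, p

# For more than two groups (g1, g2, g3 are 1-D arrays of observations):
# stats.bartlett(g1, g2, g3)                  # sensitive to departures from normality
# stats.levene(g1, g2, g3, center="mean")     # classical Levene test
# stats.levene(g1, g2, g3, center="median")   # Brown-Forsythe variant
```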
Abstract:
The state of the art in productivity measurement and analysis shows a gap between simple methods having little relevance in practice and sophisticated mathematical theory which is unwieldy for strategic and tactical planning purposes, particularly at company level. An extension is made in this thesis to the method of productivity measurement and analysis based on the concept of added value, appropriate to those companies in which the materials, bought-in parts and services change substantially and a number of plants and inter-related units are involved in providing components for final assembly. Reviews and comparisons of productivity measurement dealing with alternative indices and their problems have been made, and appropriate solutions put forward to productivity analysis in general and the added value method in particular. Based on this concept and method, three kinds of computerised models, two of them deterministic, called sensitivity analysis and deterministic appraisal, and the third one stochastic, called risk simulation, have been developed to cope with the planning of productivity and productivity growth with reference to the changes in their component variables, ranging from a single value to a class interval of values of a productivity distribution. The models are designed to be flexible and can be adjusted according to the available computer capacity, expected accuracy and presentation of the output. The stochastic model is based on the assumption of statistical independence between individual variables and the existence of normality in their probability distributions. The component variables have been forecast using polynomials of degree four. This model was tested by comparing its behaviour with that of a mathematical model using real historical data from British Leyland, and the results were satisfactory within acceptable levels of accuracy. Modifications to the model and its statistical treatment have been made as required. The results of applying these measurement and planning models to the British motor vehicle manufacturing companies are presented and discussed.
Abstract:
This study examined the use of non-standard parameters to investigate the visual field, with particular reference to the detection of glaucomatous visual field loss. Evaluation of the new perimetric strategy for threshold estimation, FASTPAC, demonstrated a reduction in the examination time of normals compared to the standard strategy. Despite an increased within-test variability, the FASTPAC strategy produced a similar mean sensitivity to the standard strategy, reducing the effects of patient fatigue. The new technique of Blue-Yellow perimetry was compared to White-White perimetry for the detection of glaucomatous field loss in OHT and POAG. Using a database of normal subjects, confidence limits for normality were constructed to account for the increased between-subject variability with increase in age and eccentricity, and for the greater variability of the Blue-Yellow field compared to the White-White field. Individual differences in ocular media absorption had little effect on Blue-Yellow field variability. Total and pattern probability analysis revealed five of 27 OHTs to exhibit Blue-Yellow focal abnormalities; two of these patients subsequently developed White-White loss. Twelve of the 24 POAGs revealed wider and/or deeper Blue-Yellow loss compared with the White-White field. Blue-Yellow perimetry showed good sensitivity and specificity characteristics; however, lack of perimetric experience and the presence of cataract influenced the Blue-Yellow visual field and may confound the interpretation of Blue-Yellow visual field loss. Visual field indices demonstrated a moderate relationship to the structural parameters of the optic nerve head using scanning laser tomography. No abnormalities in Blue-Yellow or Red-Green colour CS were apparent for the OHT patients. A greater vulnerability of the SWS pathway in glaucoma was demonstrated using Blue-Yellow perimetry; however, predicting which patients may benefit from B-Y perimetric examination is difficult. Furthermore, cataract and the extent of the field loss may limit the extent to which the integrity of the SWS channels can be selectively examined.
Abstract:
This study investigated the variability of response associated with various perimetric techniques, with the aim of improving the clinical interpretation of automated static threshold perimetry. Evaluation of a third generation of perimetric threshold algorithms (SITA) demonstrated a reduction in test duration of approximately 50% both in normal subjects and in glaucoma patients. SITA produced a slightly higher, but clinically insignificant, mean sensitivity than the previous generations of algorithms. This was associated with a decreased between-subject variability in sensitivity and hence lower confidence intervals for normality. In glaucoma, the SITA algorithms gave rise to more statistically significant visual field defects and a similar between-visit repeatability to the Full Threshold and FASTPAC algorithms. The higher estimated sensitivity observed with SITA compared to Full Threshold and FASTPAC was not attributed to a reduction in the fatigue effect. The investigation of a novel method of maintaining patient fixation, a roving fixation target which paused immediately prior to the stimulus presentation, revealed a greater degree of fixational instability with the roving fixation target compared to the conventional static fixation target. Previous experience with traditional white-white perimetry did not eradicate the learning effect in short-wavelength automated perimetry (SWAP) in a group of ocular hypertensive patients. The learning effect was smaller in an experienced group of patients than in a naive group, but was still large enough to require that patients undertake a series of at least three familiarisation tests with SWAP.
Abstract:
This article focuses on the deviations from normality of stock returns before and after a financial liberalisation reform, and shows the extent to which inference based on statistical measures of stock market efficiency can be affected by not controlling for breaks. Drawing from recent advances in the econometrics of structural change, it compares the distribution of the returns of five East Asian emerging markets when breaks in the mean and variance are either (i) imposed using certain official liberalisation dates or (ii) detected non-parametrically using a data-driven procedure. The results suggest that measuring deviations from normality of stock returns with no provision for potentially existing breaks incorporates substantial bias. This is likely to severely affect any inference based on the corresponding descriptive or test statistics.
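A simplified sketch of the comparison this article makes: normality statistics computed on raw returns versus returns standardised within regimes defined by a break in mean and variance. The break date below is taken as given; the paper's data-driven detection procedure is not reproduced.

```python
# Simplified sketch (not the paper's procedure): Jarque-Bera statistics for the
# raw returns and for returns standardised within the two regimes implied by a
# supplied break date. Break detection itself is not reproduced here.
import numpy as np
from scipy import stats

def jb_with_and_without_break(returns, break_index):
    r = np.asarray(returns, dtype=float)
    jb_raw = stats.jarque_bera(r)

    # allow the mean and variance to differ before and after the break
    pre, post = r[:break_index], r[break_index:]
    adjusted = np.concatenate([(pre - pre.mean()) / pre.std(ddof=1),
                               (post - post.mean()) / post.std(ddof=1)])
    jb_adjusted = stats.jarque_bera(adjusted)
    return jb_raw, jb_adjusted
```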
Abstract:
Purpose: To determine whether curve-fitting analysis of the ranked segment distributions of topographic optic nerve head (ONH) parameters, derived using the Heidelberg Retina Tomograph (HRT), provides a more effective statistical descriptor to differentiate the normal from the glaucomatous ONH. Methods: The sample comprised 22 normal control subjects (mean age 66.9 years; S.D. 7.8) and 22 glaucoma patients (mean age 72.1 years; S.D. 6.9) confirmed by reproducible visual field defects on the Humphrey Field Analyser. Three 10°-images of the ONH were obtained using the HRT. The mean topography image was determined and the HRT software was used to calculate the rim volume, rim area to disc area ratio, normalised rim area to disc area ratio and retinal nerve fibre cross-sectional area for each patient at 10°-sectoral intervals. The values were ranked in descending order, and each ranked-segment curve of ordered values was fitted using the least squares method. Results: There was no difference in disc area between the groups. The group mean cup-disc area ratio was significantly lower in the normal group (0.204 ± 0.16) compared with the glaucoma group (0.533 ± 0.083) (p < 0.001). The visual field indices, mean deviation and corrected pattern S.D., were significantly greater (p < 0.001) in the glaucoma group (-9.09 dB ± 3.3 and 7.91 ± 3.4, respectively) compared with the normal group (-0.15 dB ± 0.9 and 0.95 dB ± 0.8, respectively). Univariate linear regression provided the best overall fit to the ranked segment data. The equation parameters of the regression line, manually applied to the normalised rim area to disc area ratio and the rim area to disc area ratio data, correctly classified 100% of normal subjects and glaucoma patients. In this study sample, regression analysis of the ranked segment parameters was more effective than conventional ranked segment analysis, in which glaucoma patients were misclassified in approximately 50% of cases. Further investigation in larger samples will enable the calculation of confidence intervals for normality. These reference standards will then need to be investigated in an independent sample to fully validate the technique. Conclusions: Using a curve-fitting approach to fit ranked segment curves retains information relating to the topographic nature of neural loss. Such methodology appears to overcome some of the deficiencies of conventional ranked segment analysis and, subject to validation in larger scale studies, may potentially be of clinical utility for detecting and monitoring glaucomatous damage. © 2007 The College of Optometrists.
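The ranked-segment curve-fitting step can be sketched as follows (an illustration of the idea only, not the HRT software or the study's analysis): the sectoral values are ranked in descending order and a least-squares line is fitted, whose slope and intercept then act as the discriminating parameters.

```python
# Illustration of the ranked-segment curve-fitting idea only (not the HRT
# software or the study's analysis).
import numpy as np

def ranked_segment_fit(sector_values):
    """sector_values: an ONH parameter measured at 10-degree sectoral intervals (36 values)."""
    ranked = np.sort(np.asarray(sector_values, dtype=float))[::-1]   # descending order
    ranks = np.arange(1, len(ranked) + 1)
    slope, intercept = np.polyfit(ranks, ranked, deg=1)              # univariate linear fit
    return slope, intercept
```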
Abstract:
In recent decades there has been growing interest in research into children's own experiences and understandings of health and illness. This development, we would argue, has been much stimulated by the sociology of childhood, which has drawn our attention to how children as a social group are placed and perceived within the structure of society and within inter-generational relations, as well as how children are social agents and co-constructors of their social world. Drawing on this tradition, we here address some cross-cutting themes that we think are important to further the study of child health: situating children within health policy, drawing attention to practices around children's health and well-being, and a focus on children as health actors. The paper contributes to a critical analysis of child health policy and notions of child health and normality, pointing to theoretical and empirical research potential for the sociology of children's health and illness.
Abstract:
The goal of this paper is to model normal airframe conditions for helicopters in order to detect changes. This is done by inferring the flying state using a selection of sensors and frequency bands that are best for discriminating between different states. We used non-linear state-space models (NLSSM) to model flight conditions based on short-time frequency analysis of the vibration data and embedded the models in a switching framework to detect transitions between states. We then created a density model (using a Gaussian mixture model) for the NLSSM innovations: this provides a model for normal operation. To validate our approach, we used data with added synthetic abnormalities, which were detected as low-probability periods. The model of normality gave good indications of faults during the flight, in the form of low probabilities under the model, with high accuracy (>92%). © 2013 IEEE.
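A hedged sketch of the final stage described above: a Gaussian mixture fitted to innovations from the normal-condition model, with low-probability periods flagged as candidate abnormalities. The NLSSM itself is not reproduced, and the threshold is an assumption for illustration.

```python
# Hedged sketch of the density-model stage: a Gaussian mixture fitted to the
# innovations of the normal-condition model, with low-probability periods
# flagged as candidate abnormalities. The NLSSM itself and the threshold value
# are not taken from the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_normality_model(train_innovations, n_components=3):
    """train_innovations: 2-D array, rows = time steps, columns = innovation dimensions."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(train_innovations)
    return gmm

def flag_abnormal(gmm, innovations, log_prob_threshold):
    log_p = gmm.score_samples(innovations)   # per-sample log-likelihood under the model
    return log_p < log_prob_threshold        # True where operation looks abnormal
```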
Abstract:
Two new methodologies are introduced to improve inference in the evaluation of mutual fund performance against benchmarks. First, the benchmark models are estimated using panel methods with both fund and time effects. Second, the non-normality of individual mutual fund returns is accounted for by using panel bootstrap methods. We also augment the standard benchmark factors with fund-specific characteristics, such as fund size. Using a dataset of UK equity mutual fund returns, we find that fund size has a negative effect on the average fund manager’s benchmark-adjusted performance. Further, when we allow for time effects and the non-normality of fund returns, we find no evidence that even the best-performing fund managers can significantly outperform the augmented benchmarks after fund management charges are taken into account.
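As a rough illustration of the bootstrap idea only (a single-fund, residual-resampling sketch; the paper's panel estimator with fund and time effects is not reproduced), benchmark-adjusted alpha can be assessed against a bootstrap distribution generated under the null of zero alpha without assuming normally distributed returns.

```python
# Single-fund, residual-resampling sketch (the paper's panel estimator with fund
# and time effects is not reproduced): bootstrap the benchmark-adjusted alpha
# under the null of zero alpha, without assuming normally distributed returns.
import numpy as np

def bootstrap_alpha(excess_returns, factors, n_boot=2000, seed=0):
    """excess_returns: length-T vector for one fund; factors: T x K benchmark factor matrix."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(len(excess_returns)), factors])
    coef, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    resid = excess_returns - X @ coef

    boot_alphas = np.empty(n_boot)
    for b in range(n_boot):
        # resample residuals and rebuild returns with the alpha set to zero
        y_star = X[:, 1:] @ coef[1:] + rng.choice(resid, size=len(resid), replace=True)
        boot_alphas[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][0]

    p_value = np.mean(boot_alphas >= coef[0])   # one-sided: could the observed alpha arise by chance?
    return coef[0], p_value
```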