14 results for Immunologic Tests -- methods
in Aston University Research Archive
Abstract:
This article reviews the statistical methods that have been used to study the planar distribution, and especially clustering, of objects in histological sections of brain tissue. The objective of these studies is usually quantitative description, comparison between patients or correlation between histological features. Objects of interest such as neurones, glial cells, blood vessels or pathological features such as protein deposits appear as sectional profiles in a two-dimensional section. These objects may not be randomly distributed within the section but exhibit a spatial pattern, a departure from randomness either towards regularity or clustering. The methods described include simple tests of whether the planar distribution of a histological feature departs significantly from randomness using randomized points, lines or sample fields and more complex methods that employ grids or transects of contiguous fields, and which can detect the intensity of aggregation and the sizes, distribution and spacing of clusters. The usefulness of these methods in understanding the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Creutzfeldt-Jakob disease is discussed. © 2006 The Royal Microscopical Society.
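To make the simpler randomness tests concrete, the following is a minimal sketch (hypothetical counts, not the authors' code) of an index-of-dispersion test applied to object counts in contiguous sample fields: a variance-to-mean ratio near 1 is consistent with randomness, below 1 with regularity and above 1 with clustering.

```python
# Illustrative sketch only: index-of-dispersion test of whether profile counts in
# contiguous sample fields depart from a random (Poisson) expectation.
import numpy as np
from scipy import stats

counts = np.array([3, 7, 0, 5, 12, 1, 4, 9, 2, 6])  # hypothetical counts per field

mean = counts.mean()
variance = counts.var(ddof=1)
dispersion = variance / mean                             # ~1 under complete spatial randomness
chi2 = dispersion * (len(counts) - 1)                    # (n-1)*s^2/mean ~ chi-square, n-1 df
p_clustered = stats.chi2.sf(chi2, df=len(counts) - 1)    # departure towards clustering
p_regular = stats.chi2.cdf(chi2, df=len(counts) - 1)     # departure towards regularity

print(f"variance/mean = {dispersion:.2f}, p(clustered) = {p_clustered:.3f}, p(regular) = {p_regular:.3f}")
```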
Abstract:
Correlation and regression are two of the statistical procedures most widely used by optometrists. However, these tests are often misused or interpreted incorrectly, leading to erroneous conclusions from clinical experiments. This review examines the major statistical tests concerned with correlation and regression that are most likely to arise in clinical investigations in optometry. First, the use, interpretation and limitations of Pearson's product moment correlation coefficient are described. Second, the least squares method of fitting a linear regression to data, and methods for testing how well a regression line fits the data, are described. Third, the problems of using linear regression methods in observational studies, where there are errors associated with measuring the independent variable and when predicting a new value of Y for a given X, are discussed. Finally, methods for testing whether a non-linear relationship provides a better fit to the data and for comparing two or more regression lines are considered.
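As an illustration of the first two steps, a brief sketch using hypothetical paired clinical measurements: Pearson's r for the linear association, and an ordinary least-squares line fitted to the same data. The data and variable names are assumptions, not taken from the review.

```python
# Illustrative sketch only: correlation and least-squares regression on hypothetical data.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # hypothetical independent variable
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1])   # hypothetical dependent variable

r, p_value = stats.pearsonr(x, y)                            # strength of the linear association
slope, intercept, r_fit, p_fit, se = stats.linregress(x, y)  # least-squares line y = a + b*x

print(f"r = {r:.3f} (p = {p_value:.4f}); fitted line: y = {intercept:.2f} + {slope:.2f}x")
```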
Abstract:
Background: Coronary heart disease patients have to learn to manage their condition to maximise quality of life and prevent recurrence or deterioration. They may develop their own informal methods of self-management in addition to the advice they receive as part of formal cardiac rehabilitation programmes. This study aimed to explore the use of complementary and alternative medicines and therapies (CAM), self-test kits and attitudes towards health of UK patients one year after referral to cardiac rehabilitation. Method: Questionnaire given to 463 patients attending an assessment clinic for 12 month follow up in four West Midlands hospitals. Results: 91.1% completed a questionnaire. 29.1% of patients used CAM and/or self-test kits for self-management but few (8.9%) used both methods. CAM was more often used for treating other illnesses than for CHD management. Self-test kit use (77.2%) was more common than CAM (31.7%), with BP monitors being the most prevalent (80.0%). Patients obtained self-test kits from a wide range of sources, for the most part (89.5%) purchased entirely on their own initiative. Predictors of self-management were post-revascularisation status and higher scores on 'holism', 'rejection of authority' and 'individual responsibility'. Predictors of self-test kit use were higher 'holism' and 'individual responsibility' scores. Conclusion: Patients are independently using new technologies to monitor their cardiovascular health, a role formerly carried out only by healthcare practitioners. Post-rehabilitation patients reported using CAM for self-management less frequently than they reported using self-test kits. Reports of CAM use were less frequent than in previous surveys of similar patient groups. Automatic assumptions cannot be made by clinicians about which CHD patients are most likely to self-manage. In order to increase trust and compliance it is important for doctors to encourage all CHD patients to disclose their self-management practices and to continue to address this in follow up consultations.
Abstract:
Background: Self-tests are those where an individual can obtain a result without recourse to a health professional, either by getting a result immediately or by sending a sample to a laboratory that returns the result directly. Self-tests can be diagnostic, for disease monitoring, or both. There are currently tests for more than 20 different conditions available to the UK public, and self-testing is marketed as a way of alerting people to serious health problems so they can seek medical help. Almost nothing is known about the extent to which people self-test for cancer or why they do this. Self-tests for cancer could alter perceptions of risk and health behaviour, cause psychological morbidity and have a significant impact on the demand for healthcare. This study aims to gain an understanding of the frequency of self-testing for cancer and the characteristics of users. Methods: Cross-sectional survey. Adults registered in participating general practices in the West Midlands Region will be asked to complete a questionnaire that will collect socio-demographic information and basic data regarding previous and potential future use of self-test kits. The only exclusions will be people to whom the GP feels it would be inappropriate to send a questionnaire, for example because they are unable to give informed consent. Freepost envelopes will be included and non-responders will receive one reminder. Standardised prevalence rates will be estimated. Discussion: Cancer-related self-tests, currently available from pharmacies or over the Internet, include faecal occult blood tests (related to bowel cancer), prostate specific antigen tests (related to prostate cancer), breast cancer kits (a self-examination guide) and haematuria tests (related to urinary tract cancers). The effect of an increase in self-testing for cancer is unknown but may be considerable: it may affect the delivery of population-based screening programmes; empower patients or cause unnecessary anxiety; reduce costs on existing healthcare services or increase demand to investigate patients with positive test results. It is important that more is known about the characteristics of those who are using self-tests if we are to determine the potential impact on health services and the public. © 2006 Wilson et al; licensee BioMed Central Ltd.
Abstract:
In any investigation in optometry involving more than two treatment or patient groups, an investigator should be using ANOVA to analyse the results, assuming that the data conform reasonably well to the assumptions of the analysis. Ideally, specific null hypotheses should be built into the experiment from the start so that the treatment variation can be partitioned to test these effects directly. If 'post-hoc' tests are used, then an experimenter should examine the degree of protection offered by the test against the possibilities of making either a type I or a type II error. All experimenters should be aware of the complexity of ANOVA. The present article describes only one common form of the analysis, viz., that which applies to a single classification of the treatments in a randomised design. There are many different forms of the analysis, each of which is appropriate to the analysis of a specific experimental design. The uses of some of the most common forms of ANOVA in optometry have been described in a further article. If in any doubt, an investigator should consult a statistician with experience of the analysis of experiments in optometry, since once an experiment with an unsuitable design has been embarked upon, there may be little that a statistician can do to help.
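A minimal sketch of the single-classification (one-way) ANOVA described here, using hypothetical data for three treatment groups; it is illustrative only, not the analysis from the article.

```python
# Minimal sketch, assuming hypothetical data: a single-factor (one-way) ANOVA comparing
# more than two treatment groups in a randomised design.
from scipy import stats

group_a = [12.1, 11.8, 13.0, 12.5]   # hypothetical responses per treatment group
group_b = [14.2, 13.9, 14.8, 13.5]
group_c = [11.0, 10.7, 11.9, 11.4]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A significant F only shows that at least one group mean differs; planned contrasts or
# post hoc tests (with appropriate error protection) are needed to locate the difference.
```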
Abstract:
1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant and the X variable may only account for a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r. 2. The use of r assumes that a bivariate normal distribution is present and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's rs may be used. 3. A significant correlation should not be interpreted as indicating causation, especially in observational studies in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables. 4. In studies of measurement error, there are problems in using r as a test of reliability and the 'intra-class correlation coefficient' should be used as an alternative. A correlation test provides only limited information as to the relationship between two variables. Fitting a regression line to the data using the method known as 'least squares' provides much more information, and the methods of regression and their application in optometry will be discussed in the next article.
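A short sketch of points 1 and 2, with hypothetical data: report r together with r squared (the proportion of variance in Y accounted for by X), and fall back on Spearman's rs when the bivariate normality assumption behind Pearson's r is in doubt.

```python
# Illustrative sketch only; data are hypothetical.
import numpy as np
from scipy import stats

x = np.array([0.5, 1.1, 1.8, 2.4, 3.3, 4.0, 4.7])
y = np.array([1.2, 1.9, 2.1, 3.4, 3.1, 4.6, 4.9])

r, p = stats.pearsonr(x, y)        # parametric: assumes bivariate normality
rs, p_s = stats.spearmanr(x, y)    # non-parametric alternative based on ranks

print(f"Pearson r = {r:.3f} (r^2 = {r**2:.3f}, p = {p:.4f})")
print(f"Spearman rs = {rs:.3f} (p = {p_s:.4f})")
```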
Abstract:
The key to the correct application of ANOVA is careful experimental design and matching the correct analysis to that design. The following points should therefore be considered before designing any experiment: 1. In a single factor design, ensure that the factor is identified as a 'fixed' or 'random effect' factor. 2. In more complex designs, with more than one factor, there may be a mixture of fixed and random effect factors present, so ensure that each factor is clearly identified. 3. Where replicates can be grouped or blocked, the advantages of a randomised blocks design should be considered. There should be evidence, however, that blocking can sufficiently reduce the error variation to counter the loss of DF compared with a randomised design. 4. Where different treatments are applied sequentially to a patient, the advantages of a three-way design in which the different orders of the treatments are included as an 'effect' should be considered. 5. Combining different factors to make a more efficient experiment and to measure possible factor interactions should always be considered (see the sketch after this list). 6. The effect of 'internal replication' should be taken into account in a factorial design in deciding the number of replications to be used. Where possible, each error term of the ANOVA should have at least 15 DF. 7. Consider carefully whether a particular factorial design can be considered to be a split-plot or a repeated measures design. If such a design is appropriate, consider how to continue the analysis bearing in mind the problem of using post hoc tests in this situation.
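The sketch below illustrates point 5 with hypothetical data: a two-factor factorial ANOVA in which both main effects and the factor interaction are estimated from a single experiment. Factor names and values are assumptions, not taken from the article.

```python
# Illustrative sketch only: two-factor factorial ANOVA with interaction, hypothetical data.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "response": [5.1, 5.4, 6.2, 6.0, 7.3, 7.1, 4.8, 5.0, 6.5, 6.7, 7.9, 8.2],
    "factor_a": ["A", "A", "B", "B", "C", "C"] * 2,   # three levels, two replicates per cell
    "factor_b": ["one"] * 6 + ["two"] * 6,            # two levels
})

model = smf.ols("response ~ C(factor_a) * C(factor_b)", data=data).fit()
print(anova_lm(model, typ=2))   # main effects and the factor_a x factor_b interaction
```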
Abstract:
An apparatus was developed to project spinning golf balls directly onto golf greens. This employed a modified baseball practice machine with two counter-rotating pneumatic wheels. The speed of the wheels could be varied independently, allowing backspin to be given to the ball. The ball was projected into a darkened enclosure where the motion of the ball before and after impacting with the turf was recorded using a still camera and a stroboscope. The resulting photographs contained successive images of the ball on a single frame of film. The apparatus was tested on eighteen golf courses, resulting in 721 photographs of impacts. Statistical analysis was carried out on the results of the photographs and, from this, two types of green emerged. On the first, the ball tended to rebound with topspin, while on the second, the ball retained backspin after impact if the initial backspin was greater than about 350 rad s⁻¹. Eleven tests were devised to determine the characteristics of greens and statistical techniques were used to analyse the relationships between these tests. These showed the effects of the green characteristics on ball/turf impacts. It was found that the ball retained backspin on greens that were freely drained and had less than 60% of Poa annua (annual meadow grass) in their swards. Visco-elastic models were used to simulate the impact of the ball with the turf. Impacts were simulated by considering the ball to be rigid and the turf to be a two-layered system consisting of springs and dampers. The model showed good agreement with experiment and was used to simulate impacts from two different shots onto two contrasting types of green.
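A rough sketch of the modelling idea, not the author's two-layer model: the ball treated as a rigid mass impacting a single spring-and-damper (Kelvin-Voigt) turf layer, integrated with a simple explicit Euler scheme. All parameter values are hypothetical, chosen only to illustrate the approach.

```python
# Hypothetical, simplified single-layer visco-elastic impact sketch.
m = 0.0459          # golf ball mass (kg)
k = 2.0e5           # turf spring stiffness (N/m), hypothetical
c = 30.0            # turf damping coefficient (N s/m), hypothetical
g = 9.81

x, v = 0.0, -15.0   # penetration into turf (m, negative = compressed) and vertical velocity (m/s)
dt = 1e-6
t = 0.0
while x <= 0.0:     # integrate while the ball is in contact with the turf
    force = -k * x - c * v - m * g      # spring + damper reaction, minus weight
    v += (force / m) * dt
    x += v * dt
    t += dt

print(f"contact time = {t * 1e3:.2f} ms, rebound velocity = {v:.2f} m/s")
```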
Abstract:
A combination of experimental methods was applied at a clogged, horizontal subsurface flow (HSSF) municipal wastewater tertiary treatment wetland (TW) in the UK, to quantify the extent of surface and subsurface clogging which had resulted in undesirable surface flow. The three-dimensional hydraulic conductivity profile was determined, using a purpose-made device which recreates the constant head permeameter test in situ. The hydrodynamic pathways were investigated by performing dye tracing tests with Rhodamine WT and a novel multi-channel, data-logging, flow-through fluorimeter which allows synchronous measurements to be taken from a matrix of sampling points. Hydraulic conductivity varied in all planes, with the lowest measurement of 0.1 m d⁻¹ corresponding to the surface layer at the inlet, and the maximum measurement of 1550 m d⁻¹ located at a 0.4 m depth at the outlet. According to dye tracing results, the region where the overland flow ceased received five times the average flow, which then vertically short-circuited below the rhizosphere. The tracer breakthrough curve obtained from the outlet showed that this preferential flow path accounted for approximately 80% of the flow overall and arrived 8 h before a distinctly separate secondary flow path. The overall volumetric efficiency of the clogged system was 71% and the hydrology was simulated using a dual-path, dead-zone storage model. It is concluded that uneven inlet distribution, continuous surface loading and high rhizosphere resistance are responsible for the clog formation observed in this system. The average inlet hydraulic conductivity was 2 m d⁻¹, suggesting that current European design guidelines, which predict that the system will reach an equilibrium hydraulic conductivity of 86 m d⁻¹, do not adequately describe the hydrology of mature systems.
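For context, a constant-head permeameter reading rests on Darcy's law, K = QL/(A·Δh). The sketch below shows the textbook calculation with hypothetical values; it does not reproduce the geometry factors of the purpose-made in-situ device.

```python
# Illustrative sketch only: Darcy's-law constant-head permeameter calculation, hypothetical values.
def hydraulic_conductivity(flow_rate_m3_per_s, sample_length_m, cross_section_m2, head_loss_m):
    """K = Q * L / (A * dh), returned in m/day."""
    k_m_per_s = (flow_rate_m3_per_s * sample_length_m) / (cross_section_m2 * head_loss_m)
    return k_m_per_s * 86400.0

print(f"K = {hydraulic_conductivity(2.5e-6, 0.15, 0.008, 0.05):.1f} m/day")
```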
Abstract:
Objective: To assess the accuracy and acceptability of polymerase chain reaction (PCR) and optical immunoassay (OIA) tests for the detection of maternal group B streptococcus (GBS) colonisation during labour, comparing their performance with the current UK policy of risk factor-based screening. Design: Diagnostic test accuracy study. Setting and population: Fourteen hundred women in labour at two large UK maternity units provided vaginal and rectal swabs for testing. Methods: The PCR and OIA index tests were compared with the reference standard of selective enriched culture, assessed blind to index tests. Factors influencing neonatal GBS colonisation were assessed using multiple logistic regression, adjusting for antibiotic use. The acceptability of testing to participants was evaluated through a structured questionnaire administered after delivery. Main outcome measures: The sensitivity and specificity of PCR, OIA and risk factor-based screening. Results: Maternal GBS colonisation was 21% (19-24%) by combined vaginal and rectal swab enriched culture. PCR test of either vaginal or rectal swabs was more sensitive (84% [79-88%] versus 72% [65-77%]) and specific (87% [85-89%] versus 57% [53-60%]) than OIA (P < 0.001), and far more sensitive (84 versus 30% [25-35%]) and specific (87 versus 80% [77-82%]) than risk factor-based screening (P < 0.001). Maternal antibiotics (odds ratio, 0.22 [0.07-0.62]; P = 0.004) and a positive PCR test (odds ratio, 29.4 [15.8-54.8]; P < 0.001) were strongly related to neonatal GBS colonisation, whereas risk factors were not (odds ratio, 1.44 [0.80-2.62]; P = 0.2). Conclusion: Intrapartum PCR screening is a more accurate predictor of maternal and neonatal GBS colonisation than is OIA or risk factor-based screening, and is acceptable to women. © RCOG 2010 BJOG An International Journal of Obstetrics and Gynaecology.
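The accuracy measures reported here are simple functions of a 2x2 table against the enriched-culture reference standard. The sketch below computes sensitivity and specificity with Wilson 95% intervals from hypothetical counts, not the study data.

```python
# Illustrative sketch only: sensitivity and specificity from a hypothetical 2x2 table.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 250, 48      # reference-positive women: index test positive / negative (hypothetical)
tn, fp = 960, 142     # reference-negative women: index test negative / positive (hypothetical)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, method="wilson")

print(f"sensitivity = {sensitivity:.0%} {tuple(round(x, 2) for x in sens_ci)}")
print(f"specificity = {specificity:.0%} {tuple(round(x, 2) for x in spec_ci)}")
```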
Abstract:
Background/Aims: To develop and assess the psychometric validity of a Chinese-language vision health-related quality-of-life (VRQoL) measurement instrument for the Chinese visually impaired. Methods: The Low Vision Quality of Life Questionnaire (LVQOL) was translated and adapted into the Chinese-version Low Vision Quality of Life Questionnaire (CLVQOL). The CLVQOL was completed by 100 randomly selected people with low vision (primary group) and 100 people with normal vision (control group). Ninety-four participants from the primary group completed the CLVQOL a second time 2 weeks later (test-retest group). The internal consistency reliability, test-retest reliability, item-internal consistency, item-discrimination validity, construct validity and discriminatory power of the CLVQOL were calculated. Results: The review committee agreed that the CLVQOL replicated the meaning of the LVQOL and was sensitive to cultural differences. The Cronbach's α coefficient and the split-half coefficient for the four scales and the total CLVQOL scale were 0.75-0.97. The test-retest reliability, as estimated by the intraclass correlation coefficient, was 0.69-0.95. Item-internal consistency was >0.4 and item-discrimination validity was generally <0.40. The Varimax rotation factor analysis of the CLVQOL identified four principal factors. The quality-of-life ratings of the four subscales and the total score of the CLVQOL in the primary group were lower than those of the control group, both in hospital-based subjects and community-based subjects. Conclusion: The Chinese-version CLVQOL is a culturally specific vision-related quality-of-life measurement instrument. It satisfies conventional psychometric criteria, discriminates visually healthy populations from low vision patients and may be valuable in screening the local community as well as for use in clinical practice or research. © Springer 2005.
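Internal consistency was summarised with Cronbach's α; the sketch below shows the standard calculation on a small hypothetical item-response matrix, purely as an illustration of the coefficient reported.

```python
# Illustrative sketch only: Cronbach's alpha on hypothetical item responses.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

scores = [[4, 5, 4, 3], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 2], [4, 4, 3, 4]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```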
Abstract:
A sizeable amount of the testing in eye care requires either the identification of targets such as letters to assess functional vision, or the subjective evaluation of imagery by an examiner. Computers can render a variety of different targets on their monitors and can be used to store and analyse ophthalmic images. However, existing computing hardware tends to be large, screen resolutions are often too low, and objective assessments of ophthalmic images unreliable. Recent advances in mobile computing hardware and computer-vision systems can be used to enhance clinical testing in optometry. High resolution touch screens embedded in mobile devices can render targets at a wide variety of distances and can be used to record and respond to patient responses, automating testing methods. This has opened up new opportunities in computerised near vision testing. Equally, new image processing techniques can be used to increase the validity and reliability of objective computer vision systems. Three novel apps for assessing reading speed, contrast sensitivity and amplitude of accommodation were created by the author to demonstrate the potential of mobile computing to enhance clinical measurement. The reading speed app could present sentences effectively, control illumination and automate the testing procedure for reading speed assessment. Meanwhile the contrast sensitivity app made use of a bit-stealing technique and a swept frequency target to rapidly assess a patient's full contrast sensitivity function at both near and far distances. Finally, customised electronic hardware was created and interfaced to an app on a smartphone device to allow free space amplitude of accommodation measurement. A new geometrical model of the tear film and a ray tracing simulation of a Placido disc topographer were produced to provide insights on the effect of tear film breakdown on ophthalmic images. Furthermore, a new computer vision system, which used a novel eyelash segmentation technique, was created to demonstrate the potential of computer vision systems for the clinical assessment of tear stability. Studies undertaken by the author to assess the validity and repeatability of the novel apps found that their repeatability was comparable to, or better than, existing clinical methods for reading speed and contrast sensitivity assessment. Furthermore, the apps offered reduced examination times in comparison to their paper-based equivalents. The reading speed and amplitude of accommodation apps correlated highly with existing methods of assessment, supporting their validity. There remain questions over the validity of using a swept frequency sine-wave target to assess patients' contrast sensitivity functions, as no clinical test provides the same range of spatial frequencies and contrasts, nor equivalent assessment at distance and near. A validation study of the new computer vision system found that the author's tear metric correlated better with existing subjective measures of tear film stability than those of a competing computer-vision system. However, repeatability was poor in comparison to the subjective measures due to eyelash interference. The new mobile apps, computer vision system, and studies outlined in this thesis provide further insight into the potential of applying mobile and image processing technology to enhance clinical testing by eye care professionals.
Abstract:
Clogging is the main operational problem associated with horizontal subsurface flow constructed wetlands (HSSF CWs). The measurement of saturated hydraulic conductivity has proven to be a suitable technique to assess clogging within HSSF CWs. The vertical and horizontal distribution of hydraulic conductivity was assessed in two full-scale HSSF CWs by using two different in situ permeameter methods (falling head (FH) and constant head (CH) methods). Horizontal hydraulic conductivity profiles showed that both methods are correlated by a power function (FH = CH^0.7821, r² = 0.76) within the recorded range of hydraulic conductivities (0-70 m/day). However, the FH method provided lower values of hydraulic conductivity than the CH method (one to three times lower). Despite discrepancies between the magnitudes of reported readings, the relative distribution of clogging obtained via both methods was similar. Therefore, both methods are useful when exploring the general distribution of clogging and, especially, when assessing clogged areas originating from preferential flow paths within full-scale HSSF CWs. Discrepancies between the methods (in both magnitude and pattern) arose from the vertical hydraulic conductivity profiles under highly clogged conditions. It is believed this can be attributed to procedural differences between the methods, such as the method of permeameter insertion (twisting versus hammering). Results from both methods suggest that clogging develops along the shortest distance between water input and output. Results also show that the design and maintenance of inlet distributors and outlet collectors appear to have a great influence on the pattern of clogging, and hence the asset lifetime of HSSF CWs. © Springer Science+Business Media B.V. 2011.
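A power-function relation of the kind reported (FH = CH^0.78) can be obtained by a least-squares fit on log-transformed readings; the sketch below illustrates this with hypothetical paired FH and CH values, not the study data.

```python
# Illustrative sketch only: log-log least-squares fit of a power function FH = a * CH^b.
import numpy as np
from scipy import stats

ch = np.array([2.0, 5.0, 12.0, 25.0, 40.0, 65.0])   # CH conductivities (m/day), hypothetical
fh = np.array([1.6, 3.5, 7.0, 12.5, 17.0, 26.0])    # paired FH readings (m/day), hypothetical

slope, intercept, r, p, se = stats.linregress(np.log(ch), np.log(fh))
print(f"FH = {np.exp(intercept):.2f} * CH^{slope:.2f}  (r^2 = {r**2:.2f})")
```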
Abstract:
This research is focused on the optimisation of resource utilisation in wireless mobile networks with the consideration of the users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated. These include video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems. A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment have been used in the validation tests. It has been shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and that this correlation property is content independent. Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with the consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment where the proposed research framework is examined and the results are compared with existing scheduling methods on the achievable fairness, efficiency and correlation. Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritisation of users by considering their perceived quality for the services received. Meanwhile, a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied to act as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions and the shape of the QoE distribution amongst the users for different scheduling policies have been demonstrated in the context of LTE. Finally, the work on interworking between the mobile communication system at the macro-cell level and the different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism of the user's data (e.g. video traffic) while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fair-efficient spectrum. The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and each of the WiFi access points involved. The performance of the non-seamless and user-controlled mobile traffic offloading (through the mobile WiFi devices) has been evaluated and compared with that of the standard operator-controlled WiFi hotspots.
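As an illustration of the kind of continuity metric involved, the sketch below combines the fraction of playback time spent stalled with the stall frequency. This is a hedged stand-in, not the published definition of Pause Intensity, and the stall event data are hypothetical.

```python
# Hypothetical sketch of a pause-based continuity metric (not the published Pause Intensity formula).
def pause_metric(pause_durations_s, window_s):
    """Combine the fraction of time spent paused with the pause frequency over a window."""
    total_pause = sum(pause_durations_s)
    duration_ratio = total_pause / window_s          # proportion of the window spent stalled
    frequency = len(pause_durations_s) / window_s    # pauses per second of observation
    return duration_ratio * frequency

stalls = [1.2, 0.8, 2.5]        # observed stall lengths (s) in a 60 s playback window
print(f"pause metric = {pause_metric(stalls, 60.0):.4f}")
```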