858 results for Basophil Degranulation Test -- methods
Abstract:
Prices and yields of UK government zero-coupon bonds are used to test alternative yield curve estimation models. Zero-coupon bonds permit a purer comparison, since the models provide only an interpolation service rather than also being required to make estimation feasible. It is found that better yield curve estimates are obtained by fitting to the yield curve directly rather than fitting first to the discount function. A simple procedure for setting the smoothness of the fitted curves is developed, and a positive relationship between oversmoothness and fitting error is identified. A cubic spline function fitted directly to the yield curve provides the best overall balance of fitting error and smoothness, both along the yield curve and within local maturity regions.
Abstract:
A system for the NDT testing of the integrity of composite materials and of adhesive bonds has been developed to meet industrial requirements. The vibration techniques used were found to be applicable to the development of fluid-measuring transducers. The vibrational spectra of thin rectangular bars were used for the NDT work. A machined cut in a bar had a significant effect on the spectrum, but a genuine crack gave an unambiguous response at high amplitudes: the generation of fretting crack noise at frequencies far above that of the drive. A specially designed vibrational decrement meter, which in effect measures mechanical energy loss, enabled a numerical classification of material adhesion to be obtained. This was used to study bars which had been flame- or plasma-sprayed with a variety of materials, and it has become a useful tool in optimising coating methods. A direct industrial application was to classify piston rings of high-performance I.C. engines. Each consists of a cast iron ring with a channel into which molybdenum, a good bearing surface, is sprayed. The NDT classification agreed quite well with the destructive test normally used. The techniques and equipment used for the NDT work were applied to developing the tuning fork transducers investigated by Hassan into commercial density and viscosity devices. Using narrowly spaced, large-area tines, a thin lamina of fluid is trapped between them; it stores a large fraction of the vibrational energy and, acting as an inertial load, reduces the frequency. Magnetostrictive and piezoelectric effects, used singly or in combination, enable the fork to be operated through a flange, which allows it to be used in pipeline or 'dipstick' applications. With a different tine geometry the viscosity loading can be made predominant; this, together with the signal decrement of the density transducer, makes a practical viscometer.
Abstract:
Background: Age-related macular degeneration (AMD) is the leading cause of visual disability in people over 60 years of age in the developed world. The success of treatment deteriorates with increased latency of diagnosis. The purpose of this study was to determine the reliability of the macular mapping test (MMT), and to investigate its potential as a screening tool. Methods: The study population comprised 31 healthy eyes of 31 participants. To assess reliability, four MMT measurements were taken in two sessions separated by one hour by two practitioners, with reversal of order in the second session. MMT readings were also taken from 17 age-related maculopathy (ARM) and 12 AMD affected eyes. Results: For the normal cohort, average MMT scores ranged from 85.5 to 100.0 MMT points. Scores ranged from 79.0 to 99.0 for the ARM group and from 9.0 to 92.0 for the AMD group. MMT scores were reliable to within ±7.0 points. The difference between AMD affected eyes and controls (z = 3.761, p < 0.001) was significant. The difference between ARM affected eyes and controls was not significant (z = -0.216, p = 0.829). Conclusion: The reliability data show that a change of 14 points or more is required to indicate a clinically significant change. This value is required for use of the MMT as an outcome measure in clinical trials. Although there was no difference between MMT scores from ARM affected eyes and controls, the MMT has the advantage over the Amsler grid in that it uses a letter target, has a peripheral fixation aid, and provides a numerical score. This score could be beneficial in office and home monitoring of AMD progression, as well as an outcome measure in clinical research. © 2005 Bartlett et al; licensee BioMed Central Ltd.
Abstract:
In this second article, statistical ideas are extended to the problem of testing whether there is a true difference between two samples of measurements. First, it will be shown that the difference between the means of two samples comes from a population of such differences which is normally distributed. Second, the 't' distribution, one of the most important in statistics, will be applied to a test of the difference between two means using a simple data set drawn from a clinical experiment in optometry. Third, in making a t-test, a statistical judgement is made as to whether there is a significant difference between the means of two samples. Before the widespread use of statistical software, this judgement was made with reference to a statistical table. Even if such tables are not used, it is useful to understand their logical structure and how to use them. Finally, the analysis of data, which are known to depart significantly from the normal distribution, will be described.
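The two-sample comparison the abstract describes can be sketched numerically. The following is a minimal illustration (the data are invented, not drawn from the article) of the pooled-variance t statistic that would then be compared against the tabulated critical value for the given degrees of freedom:

```python
from statistics import mean, variance
from math import sqrt

def pooled_t(a, b):
    """Equal-variance two-sample t statistic with pooled variance.
    Hypothetical helper for illustration only."""
    na, nb = len(a), len(b)
    # Pooled estimate of the common variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

# Invented measurements from two samples; t is then judged against
# the critical value of the t distribution with df degrees of freedom.
t, df = pooled_t([2, 4, 6, 8], [1, 3, 5, 7])
```

A small |t| relative to the tabulated critical value (here df = 6) would indicate no significant difference between the two means.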
Abstract:
In any investigation in optometry involving more than two treatment or patient groups, an investigator should use ANOVA to analyse the results, assuming that the data conform reasonably well to the assumptions of the analysis. Ideally, specific null hypotheses should be built into the experiment from the start so that the treatment variation can be partitioned to test these effects directly. If 'post-hoc' tests are used, then an experimenter should examine the degree of protection offered by the test against the possibility of making either a type 1 or a type 2 error. All experimenters should be aware of the complexity of ANOVA. The present article describes only one common form of the analysis, viz., that which applies to a single classification of the treatments in a randomised design. There are many different forms of the analysis, each of which is appropriate to a specific experimental design. The uses of some of the most common forms of ANOVA in optometry are described in a further article. If in any doubt, an investigator should consult a statistician with experience of the analysis of experiments in optometry, since once embarked upon an experiment with an unsuitable design, there may be little that a statistician can do to help.
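As a concrete sketch of the single-classification (one-way) ANOVA the abstract refers to, the variance-ratio computation can be written out directly; the treatment groups below are invented for illustration:

```python
from statistics import mean

def one_way_anova(groups):
    """F ratio for a single-classification (one-way) ANOVA on k independent
    groups. Hypothetical helper, not taken from the article."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    means = [mean(g) for g in groups]
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))  # between-groups SS
    ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)    # within-groups SS
    df_b, df_w = k - 1, n - k
    return (ssb / df_b) / (ssw / df_w), df_b, df_w

# Three invented treatment groups; F is then compared with the critical
# value of F(df_b, df_w) to judge whether the treatment effect is significant.
F, df_b, df_w = one_way_anova([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```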
Abstract:
1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant, and the X variable may account for only a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r. 2. The use of r assumes that a bivariate normal distribution is present, and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's rs may be used. 3. A significant correlation should not be interpreted as indicating causation, especially in observational studies, in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables. 4. In studies of measurement error, there are problems in using r as a test of reliability, and the 'intra-class correlation coefficient' should be used as an alternative. A correlation test provides only limited information as to the relationship between two variables. Fitting a regression line to the data using the method known as 'least squares' provides much more information, and the methods of regression and their application in optometry will be discussed in the next article.
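The point about always reporting r squared alongside r can be illustrated with a short calculation (hypothetical data, for illustration only):

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient.
    Hypothetical helper, not taken from the article."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Invented paired observations; r squared gives the proportion of the
# variance in Y accounted for by the linear relationship with X.
r = pearson_r([1, 2, 3, 4, 5], [1, 2, 2, 4, 5])
r2 = r ** 2
```

Even a "significant" r is only informative once r2 shows how much of the variance in Y the linear model actually explains.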
Abstract:
1. Fitting a linear regression to data provides much more information about the relationship between two variables than a simple correlation test. A goodness-of-fit test of the line should always be carried out: r squared estimates the strength of the relationship between Y and X, ANOVA tests whether a statistically significant line is present, and the 't' test whether the slope of the line is significantly different from zero. 2. Always check whether the data collected fit the assumptions for regression analysis and, if not, whether a transformation of the Y and/or X variables is necessary. 3. If the regression line is to be used for prediction, it is important to determine whether the prediction involves an individual y value or a mean. Care should be taken if predictions are made close to the extremities of the data; they are subject to considerable error if x falls beyond the range of the data. Multiple predictions require correction of the P values. 4. If several individual regression lines have been calculated from a number of similar sets of data, consider whether they should be combined to form a single regression line. 5. If the data exhibit a degree of curvature, then fitting a higher-order polynomial curve may provide a better fit than a straight line. In this case, a test of whether the data depart significantly from a linear regression should be carried out.
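A minimal least-squares sketch (invented data, not from the article) showing the slope, intercept and r-squared goodness-of-fit statistic referred to in point 1:

```python
from statistics import mean

def least_squares(x, y):
    """Ordinary least-squares line y = a + b*x, plus r squared.
    Hypothetical helper for illustration only."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    # r squared from residual and total sums of squares
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    sst = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - sse / sst

# Invented paired observations; predictions outside the range of x
# (here beyond 1..4) would be subject to considerable error.
a, b, r2 = least_squares([1, 2, 3, 4], [2, 4, 5, 7])
```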
Abstract:
1. The techniques associated with regression, whether linear or non-linear, are some of the most useful statistical procedures that can be applied in clinical studies in optometry. 2. In some cases, there may be no scientific model of the relationship between X and Y that can be specified in advance and the objective may be to provide a ‘curve of best fit’ for predictive purposes. In such cases, the fitting of a general polynomial type curve may be the best approach. 3. An investigator may have a specific model in mind that relates Y to X and the data may provide a test of this hypothesis. Some of these curves can be reduced to a linear regression by transformation, e.g., the exponential and negative exponential decay curves. 4. In some circumstances, e.g., the asymptotic curve or logistic growth law, a more complex process of curve fitting involving non-linear estimation will be required.
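The remark in point 3 that some curves can be reduced to a linear regression by transformation can be sketched for the exponential decay case: taking logarithms makes ln y linear in x, so ordinary least squares recovers the parameters. The data below are synthetic and noise-free, purely for illustration:

```python
from math import log, exp
from statistics import mean

def fit_exponential(x, y):
    """Fit y = A * exp(k * x) by linear regression on log-transformed y.
    Hypothetical helper, not taken from the article."""
    ly = [log(v) for v in y]  # ln y = ln A + k * x is linear in x
    mx, ml = mean(x), mean(ly)
    k = sum((xi - mx) * (li - ml) for xi, li in zip(x, ly)) \
        / sum((xi - mx) ** 2 for xi in x)
    return exp(ml - k * mx), k  # back-transform the intercept to recover A

# Synthetic decay data generated with A = 10, k = -0.5;
# the log-transform fit recovers both parameters.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [10.0 * exp(-0.5 * xi) for xi in xs]
A, k = fit_exponential(xs, ys)
```

Curves that cannot be linearised this way (e.g. the asymptotic or logistic forms mentioned above) instead require iterative non-linear estimation.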
Abstract:
The timeline imposed by recent worldwide chemical legislation is not amenable to conventional in vivo toxicity testing, requiring the development of rapid, economical in vitro screening strategies which have acceptable predictive capacities. When acquiring regulatory neurotoxicity data, distinction on whether a toxic agent affects neurons and/or astrocytes is essential. This study evaluated neurofilament (NF) and glial fibrillary acidic protein (GFAP) directed single-cell (S-C) ELISA and flow cytometry as methods for distinguishing cell-specific cytoskeletal responses, using the established human NT2 neuronal/astrocytic (NT2.N/A) co-culture model and a range of neurotoxic (acrylamide, atropine, caffeine, chloroquine, nicotine) and non-neurotoxic (chloramphenicol, rifampicin, verapamil) test chemicals. NF and GFAP directed flow cytometry was able to identify several of the test chemicals as being specifically neurotoxic (chloroquine, nicotine) or astrocytoxic (atropine, chloramphenicol) via quantification of cell death in the NT2.N/A model at cytotoxic concentrations using the resazurin cytotoxicity assay. Those neurotoxicants with low associated cytotoxicity are the most significant in terms of potential hazard to the human nervous system. The NF and GFAP directed S-C ELISA data predominantly demonstrated the known neurotoxicants only to affect the neuronal and/or astrocytic cytoskeleton in the NT2.N/A cell model at concentrations below those affecting cell viability. This report concluded that NF and GFAP directed S-C ELISA and flow cytometric methods may prove to be valuable additions to an in vitro screening strategy for differentiating cytotoxicity from specific neuronal and/or astrocytic toxicity. Further work using the NT2.N/A model and a broader array of toxicants is appropriate in order to confirm the applicability of these methods.
Abstract:
A combination of experimental methods was applied at a clogged, horizontal subsurface flow (HSSF) municipal wastewater tertiary treatment wetland (TW) in the UK, to quantify the extent of surface and subsurface clogging which had resulted in undesirable surface flow. The three-dimensional hydraulic conductivity profile was determined using a purpose-made device which recreates the constant head permeameter test in situ. The hydrodynamic pathways were investigated by performing dye tracing tests with Rhodamine WT and a novel multi-channel, data-logging, flow-through fluorimeter which allows synchronous measurements to be taken from a matrix of sampling points. Hydraulic conductivity varied in all planes, with the lowest measurement of 0.1 m d⁻¹ corresponding to the surface layer at the inlet, and the maximum measurement of 1550 m d⁻¹ located at 0.4 m depth at the outlet. According to the dye tracing results, the region where the overland flow ceased received five times the average flow, which then vertically short-circuited below the rhizosphere. The tracer breakthrough curve obtained from the outlet showed that this preferential flow path accounted for approximately 80% of the flow overall and arrived 8 h before a distinctly separate secondary flow path. The overall volumetric efficiency of the clogged system was 71%, and the hydrology was simulated using a dual-path, dead-zone storage model. It is concluded that uneven inlet distribution, continuous surface loading and high rhizosphere resistance are responsible for the clog formation observed in this system. The average inlet hydraulic conductivity was 2 m d⁻¹, suggesting that current European design guidelines, which predict that the system will reach an equilibrium hydraulic conductivity of 86 m d⁻¹, do not adequately describe the hydrology of mature systems.
Abstract:
Evaluation and benchmarking in content-based image retrieval has always been a somewhat neglected research area, making it difficult to judge the efficacy of many presented approaches. In this paper, we investigate the issue of benchmarking for colour-based image retrieval systems, which enable users to retrieve images from a database based on low-level colour content alone. We argue that current image retrieval evaluation methods are not suited to benchmarking colour-based image retrieval systems, due mainly to their not allowing users to reflect upon the suitability of retrieved images within the context of a creative project, and to their reliance on highly subjective ground truths. As a solution to these issues, the research presented here introduces the Mosaic Test for evaluating colour-based image retrieval systems, in which test users are asked to create an image mosaic of a predetermined target image using the colour-based image retrieval system that is being evaluated. We report findings from a user study which suggest that the Mosaic Test overcomes the major drawbacks associated with existing image retrieval evaluation methods, by enabling users to reflect upon image selections and by automatically measuring image relevance in a way that correlates with the perception of many human assessors. We therefore propose that the Mosaic Test be adopted as a standardised benchmark for evaluating and comparing colour-based image retrieval systems.
Abstract:
Objectives - Powdered and granulated particulate materials make up most of the ingredients of pharmaceuticals and are often at risk of undergoing unwanted agglomeration, or caking, during transport or storage. This is particularly acute when bulk powders are exposed to extreme swings in temperature and relative humidity, which is now common as drugs are produced and administered in increasingly hostile climates and are stored for longer periods of time prior to use. This study explores the possibility of using a uniaxial unconfined compression test to compare the strength of caked agglomerates exposed to different temperatures and relative humidities. It is part of a longer-term study to construct a protocol to predict the caking tendency of a new bulk material from individual particle properties. The main challenge is to develop techniques that provide repeatable results yet are presented simply enough to be useful to a wide range of industries. Methods - Powdered sucrose, a major pharmaceutical ingredient, was poured into a split die and exposed to high and low relative humidity cycles at room temperature. The typical ranges were 20-30% for the lower value and 70-80% for the higher value. The outer die casing was then removed and the resultant agglomerate was subjected to an unconfined compression test using a plunger fitted to a Zwick compression tester. Force against displacement was logged so that the dynamics of failure, as well as the failure load of the sample, could be recorded. The experimental matrix included varying the number of cycles, the difference between the maximum and minimum relative humidity, the height and diameter of the samples, and the particle size. Results - Trends showed that the tensile strength of the agglomerates increased with the number of cycles and also with more extreme swings in relative humidity. This agrees with previous work on alternative methods of measuring the tensile strength of sugar agglomerates formed from humidity cycling (Leaper et al 2003). Conclusions - The results show that, at the very least, the uniaxial tester is a good comparative tester for examining the caking tendency of powdered materials, with a simple arrangement and operation that are compatible with the requirements of industry. However, further work is required to continue to optimize the height/diameter ratio during tests.
Abstract:
In this paper, we discuss some practical implications for implementing adaptable network algorithms applied to non-stationary time series problems. Two real world data sets, containing electricity load demands and foreign exchange market prices, are used to test several different methods, ranging from linear models with fixed parameters, to non-linear models which adapt both parameters and model order on-line. Training with the extended Kalman filter, we demonstrate that the dynamic model-order increment procedure of the resource allocating RBF network (RAN) is highly sensitive to the parameters of the novelty criterion. We investigate the use of system noise for increasing the plasticity of the Kalman filter training algorithm, and discuss the consequences for on-line model order selection. The results of our experiments show that there are advantages to be gained in tracking real world non-stationary data through the use of more complex adaptive models.
Abstract:
A variety of content-based image retrieval systems exist which enable users to perform image retrieval based on colour content - i.e., colour-based image retrieval. For the production of media for use in television and film, colour-based image retrieval is useful for retrieving specifically coloured animations, graphics or videos from large databases (by comparing user queries to the colour content of extracted key frames). It is also useful to graphic artists creating realistic computer-generated imagery (CGI). Unfortunately, current methods for evaluating colour-based image retrieval systems have two major drawbacks. Firstly, the relevance of images retrieved during the task cannot be measured reliably. Secondly, existing methods do not account for the creative design activity known as reflection-in-action. Consequently, the development and application of novel and potentially more effective colour-based image retrieval approaches, better supporting the large number of users creating media for television and film productions, is not possible, as their efficacy cannot be reliably measured and compared with existing technologies. As a solution to this problem, this paper introduces the Mosaic Test, a user-based evaluation approach in which participants complete an image mosaic of a predetermined target image using the colour-based image retrieval system that is being evaluated. We introduce the Mosaic Test and report on a user evaluation. The findings of the study reveal that the Mosaic Test overcomes the two major drawbacks associated with existing evaluation methods and does not require expert participants. © 2012 Springer Science+Business Media, LLC.
Abstract:
Background - The PELICAN Multidisciplinary Team Total Mesorectal Excision (MDT-TME) Development Programme aimed to improve clinical outcomes for rectal cancer by educating colorectal cancer teams in precision surgery and related aspects of multidisciplinary care. The Programme reached almost all colorectal cancer teams across England. We took the opportunity to assess the impact of participating in this novel team-based Development Programme on the working lives of colorectal cancer team members. Methods - The impact of participating in the programme on team members' self-reported job stress, job satisfaction and team performance was assessed in a pre-post course study. 333/568 (59%) team members, from the 75 multidisciplinary teams who attended the final year of the Programme, completed questionnaires pre-course, and 6-8 weeks post-course. Results - Across all team members, the main sources of job satisfaction related to working in multidisciplinary teams; whilst feeling overloaded was the main source of job stress. Surgeons and clinical nurse specialists reported higher levels of job satisfaction than team members who do not provide direct patient care, whilst MDT coordinators reported the lowest levels of job satisfaction and job stress. Both job stress and satisfaction decreased after participating in the Programme for all team members. There was a small improvement in team performance. Conclusions - Participation in the Development Programme had a mixed impact on the working lives of team members in the immediate aftermath of attending. The decrease in team members' job stress may reflect the improved knowledge and skills conferred by the Programme. The decrease in job satisfaction may be the consequence of being unable to apply these skills immediately in clinical practice because of a lack of required infrastructure and/or equipment. 
In addition, whilst the Programme raised awareness of the challenges of teamworking, a greater focus on tackling these issues may have improved working lives further.