938 results for Basophil Degranulation Test -- methods
Abstract:
In this paper, numerical simulations are used in an attempt to find optimal source profiles for high-frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches have been proposed for determining the drives for individual elements in the RF resonator. The first method is an iterative optimization technique with a kernel that evaluates the RF fields inside an imaging plane of a human head model using pre-characterized sensitivity profiles of the individual rungs of a resonator; the second is a regularization-based technique, in which a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the optimization method is more flexible, as it can take into account other issues such as controlling SAR or reshaping the resonator structures. It is hoped that these schemes and their extensions will be useful for determining multi-element RF drives in a variety of applications.
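The regularization step can be sketched as follows. Assuming a pre-computed complex sensitivity matrix S (rows = field sample points in the imaging plane, columns = resonator elements) and a target B1 profile b, Tikhonov regularization gives the element drives in closed form. The matrix, target and regularization parameter below are synthetic placeholders, not the paper's data:

```python
import numpy as np

def regularized_drives(S, b_target, lam=1e-2):
    # Minimize ||S w - b||^2 + lam * ||w||^2 (Tikhonov regularization),
    # which stabilises the otherwise ill-posed inversion.
    n = S.shape[1]
    A = S.conj().T @ S + lam * np.eye(n)
    return np.linalg.solve(A, S.conj().T @ b_target)

# Toy example: 50 field samples in the imaging plane, 8 coil elements.
rng = np.random.default_rng(0)
S = rng.standard_normal((50, 8)) + 1j * rng.standard_normal((50, 8))
b_target = np.ones(50, dtype=complex)   # a perfectly uniform target field
w = regularized_drives(S, b_target)
residual = np.linalg.norm(S @ w - b_target)
```

Decreasing `lam` reduces the residual but amplifies the drives; choosing it is the usual bias-variance trade-off in regularized inversion.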
Abstract:
Objectives: The aim of this study was to assess the consistency and performance of radiologists interpreting breast magnetic resonance imaging (MRI) examinations. Materials and Methods: Two test sets of eight cases comprising cancers, benign disease, technical problems and parenchymal enhancement were prepared from two manufacturers' equipment (X and Y) and reported by 15 radiologists using the recording form and scoring system of the UK MRI breast screening study (MAgnetic Resonance Imaging in Breast Screening, MARIBS). Variations in assessments of morphology, kinetic scores and diagnosis were measured by assessing intraobserver and interobserver variability and agreement. The sensitivity and specificity of reporting performance were determined using receiver operating characteristic (ROC) curve analysis. Results: Intraobserver variation was seen in 13 (27.7%) of 47 of the radiologists' conclusions (four technical and seven pathological differences). Substantial interobserver variation was observed in the scores recorded for morphology, pattern of enhancement, quantification of enhancement and washout pattern. The overall sensitivity of breast MRI was high [88.6%, 95% confidence interval (CI) 77.4-94.7%], combined with a specificity of 69.2% (95% CI 60.5-76.7%). The sensitivities were similar for the two test sets (P=.3), but the specificity was significantly higher for the Manufacturer X dataset (P
Abstract:
Objective: This paper compares four techniques used to assess change in neuropsychological test scores before and after coronary artery bypass graft surgery (CABG), and includes a rationale for the classification of a patient as overall impaired. Methods: A total of 55 patients were tested before and after surgery on the MicroCog neuropsychological test battery. A matched control group underwent the same testing regime to generate test–retest reliabilities and practice effects. Two techniques designed to assess statistical change were used: the Reliable Change Index (RCI), modified for practice, and the Standardised Regression-based (SRB) technique. These were compared against two fixed cutoff techniques (standard deviation and 20% change methods). Results: The incidence of decline across test scores varied markedly depending on which technique was used to describe change. The SRB method identified more patients as declined on most measures. In comparison, the two fixed cutoff techniques displayed relatively reduced sensitivity in the detection of change. Conclusions: Overall change in an individual can be described provided the investigators choose a rational cutoff based on the likely spread of scores due to chance. A cutoff of ≥20% of the test scores used provided an acceptable probability given the number of tests commonly encountered. Investigators must also choose a test battery that minimises shared variance among test scores.
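The practice-adjusted Reliable Change Index can be illustrated with the standard Jacobson-Truax form: the observed change, minus the expected practice effect, scaled by the standard error of the difference derived from test-retest reliability. The numbers below are purely illustrative, not values from the study:

```python
import math

def reliable_change_index(score_pre, score_post, sd_baseline, retest_r, practice=0.0):
    # Standard error of measurement from baseline SD and retest reliability,
    # then the standard error of a difference between two such scores.
    se_measurement = sd_baseline * math.sqrt(1 - retest_r)
    se_difference = math.sqrt(2) * se_measurement
    # Change beyond the expected practice effect, in SE-of-difference units.
    return ((score_post - score_pre) - practice) / se_difference

# Illustrative numbers only (not from the study):
rci = reliable_change_index(score_pre=100, score_post=88,
                            sd_baseline=15, retest_r=0.8, practice=2.0)
declined = rci < -1.645   # one-tailed 95% criterion for decline
```

The SRB technique instead regresses retest scores on baseline scores in the control group and standardises each patient's residual from that regression.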
Abstract:
Objective: To validate the unidimensionality of the Action Research Arm Test (ARAT) using Mokken analysis and to examine whether scores of the ARAT can be transformed into interval scores using Rasch analysis. Subjects and methods: A total of 351 patients with stroke were recruited from 5 rehabilitation departments located in 4 regions of Taiwan. The 19-item ARAT was administered to all the subjects by a physical therapist. The data were analysed using item response theory by non-parametric Mokken analysis followed by Rasch analysis. Results: The results supported a unidimensional scale of the 19-item ARAT by Mokken analysis, with the scalability coefficient H = 0.95. Except for the item "pinch ball bearing 3rd finger and thumb", the remaining 18 items had a consistently hierarchical order along the continuum of upper extremity function. In contrast, the Rasch analysis, with a stepwise deletion of misfit items, showed that only 4 items ("grasp ball", "grasp block 5 cm³", "grasp block 2.5 cm³", and "grip tube 1 cm³") fit the Rasch rating scale model's expectations. Conclusion: Our findings indicated that the 19-item ARAT constitutes a unidimensional construct measuring upper extremity function in stroke patients. However, the results did not support the premise that the raw sum scores of the ARAT can be transformed into interval Rasch scores. Thus, the raw sum scores of the ARAT can provide information only about the order of patients in their upper extremity functional abilities, and cannot represent each patient's exact functioning.
Abstract:
Posttransplantation diabetes (PTD) contributes to cardiovascular disease and graft loss in renal transplant recipients (RTR). Current recommendations advise fasting blood glucose (FBG) as the screening and diagnostic test of choice for PTD. This study sought to determine (1) the predictive power of FBG with respect to 2-h blood glucose (2HBG) and (2) the prevalence of PTD using FBG and 2HBG compared with that using FBG alone, in prevalent RTR. A total of 200 RTR (mean age 52 yr; 59% male; median transplant duration 6.6 yr) who were >6 mo posttransplantation and had no known history of diabetes were studied. Patients with FBG
Abstract:
Primary objectives: (1) To investigate the Nonword Repetition test (NWR) as an index of sub-vocal rehearsal deficits after mild traumatic brain injury (mTBI); (2) to assess the reliability, validity and sensitivity of the NWR; and (3) to compare the NWR to more sensitive tests of verbal memory. Research design: An independent groups design. Methods and procedures: Study 1 administered the NWR, together with the Rapid Screen of Concussion (RSC), to 46 mTBI participants and 61 uninjured controls. Study 2 compared mTBI, orthopaedic and uninjured participants on the NWR and the Hopkins Verbal Learning Test (HVLT-R). Main outcomes and results: The NWR did not improve the diagnostic accuracy of the RSC. However, it is reliable and indexes sub-vocal rehearsal speed. These findings provide evidence that although the current form of the NWR lacks sensitivity to the impact of mTBI, the development of a more sensitive test of sub-vocal rehearsal deficits following mTBI is warranted.
Abstract:
The purpose of this paper was to evaluate the psychometric properties of a stage-specific self-efficacy scale for physical activity with classical test theory (CTT), confirmatory factor analysis (CFA) and item response modeling (IRM). Women who enrolled in the Women On The Move study completed a 20-item stage-specific self-efficacy scale developed for this study [n = 226, 51.1% African-American and 48.9% Hispanic women, mean age = 49.2 (±7.0) years, mean body mass index = 29.7 (±6.4)]. Three analyses were conducted: (i) a CTT item analysis, (ii) a CFA to validate the factor structure and (iii) an IRM analysis. The CTT item analysis and the CFA results showed that the scale had high internal consistency (ranging from 0.76 to 0.93) and a strong factor structure. Results also showed that the scale could be improved by modifying or eliminating some of the existing items without significantly altering the content of the scale. The IRM results also showed that the scale had few items targeting high self-efficacy, and the stage-specific assumption underlying the scale was rejected. In addition, the IRM analyses found that the five-point response format functioned more like a four-point response format. Overall, employing multiple methods to assess the psychometric properties of the stage-specific self-efficacy scale demonstrated the complementary nature of these methods and highlighted the strengths and weaknesses of this scale.
Abstract:
We are developing a telemedicine application which offers automated diagnosis of facial (Bell's) palsy through a Web service. We used a test data set of 43 images of facial palsy patients and 44 normal people to develop the automatic recognition algorithm. Three different image pre-processing methods were used. Machine learning techniques (support vector machine, SVM) were used to examine the difference between the two halves of the face. If there was a sufficient difference, then the SVM recognized facial palsy. Otherwise, if the halves were roughly symmetrical, the SVM classified the image as normal. It was found that the facial palsy images had a greater Hamming Distance than the normal images, indicating greater asymmetry. The median distance in the normal group was 331 (interquartile range 277-435) and the median distance in the facial palsy group was 509 (interquartile range 334-703). This difference was significant (P
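The underlying symmetry measure can be sketched as follows: binarise the image, mirror one half onto the other, and count disagreeing pixels (a Hamming distance). This is a minimal illustration with toy arrays; the study's actual pre-processing and SVM classification stage are not reproduced here:

```python
import numpy as np

def asymmetry_hamming(face, threshold=0.5):
    # Hamming distance between the binarised left half of a face image
    # and the mirrored right half -- a larger distance means more asymmetry.
    h, w = face.shape
    left = face[:, :w // 2] > threshold
    right_mirrored = face[:, w - w // 2:][:, ::-1] > threshold
    return int(np.count_nonzero(left != right_mirrored))

# Toy 8x8 "faces" (made-up values, not real image data):
symmetric = np.tile(np.array([0.1, 0.9, 0.9, 0.1]), (8, 2))
asymmetric = symmetric.copy()
asymmetric[:, 5:] = 0.0                  # distort one half of the face
d_sym = asymmetry_hamming(symmetric)     # 0 for a mirror-symmetric image
d_asym = asymmetry_hamming(asymmetric)   # positive for the distorted one
```

A classifier (such as the SVM in the study) would then learn a decision boundary on distances like these rather than on a fixed threshold.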
Abstract:
In some applications of data envelopment analysis (DEA) there may be doubt as to whether all the DMUs form a single group with a common efficiency distribution. The Mann-Whitney rank statistic has been used to evaluate whether two groups of DMUs come from a common efficiency distribution under the assumption that they share a common frontier, and to test whether the two groups have a common frontier. These procedures have subsequently been extended using the Kruskal-Wallis rank statistic to consider more than two groups. This technical note identifies problems with the second of these applications of both the Mann-Whitney and Kruskal-Wallis rank statistics. It also considers possible alternative methods of testing whether groups have a common frontier, and the difficulties of disaggregating managerial and programmatic efficiency within a non-parametric framework. © 2007 Springer Science+Business Media, LLC.
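The first of the two uses (comparing efficiency distributions under an assumed common frontier) is a straightforward rank test and can be sketched with SciPy; the efficiency scores below are hypothetical placeholders:

```python
from scipy.stats import mannwhitneyu

# Hypothetical DEA efficiency scores for two groups of DMUs
# (illustrative numbers only, not from the note).
group_a = [0.95, 0.91, 0.88, 0.97, 0.90, 0.93]
group_b = [0.62, 0.71, 0.58, 0.66, 0.69, 0.74]

# Rank-based test of whether the two groups come from a common
# efficiency distribution.
stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
```

The note's criticism concerns the second application (testing whether the groups share a common frontier), not this two-sample comparison itself.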
Abstract:
Prices and yields of UK government zero-coupon bonds are used to test alternative yield curve estimation models. Zero-coupon bonds permit a purer comparison, as the models then provide only an interpolation service rather than also being needed to make estimation feasible. It is found that better yield curve estimates are obtained by fitting to the yield curve directly rather than fitting first to the discount function. A simple procedure for setting the smoothness of the fitted curves is developed, and a positive relationship between oversmoothness and the fitting error is identified. A cubic spline function fitted directly to the yield curve provides the best overall balance of fitting error and smoothness, both along the yield curve as a whole and within local maturity regions.
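Fitting a cubic spline directly to zero-coupon yields, with a smoothing factor that trades fitting error against smoothness, can be sketched as follows (the maturities and yields are made-up illustrative numbers, not actual UK gilt data):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical zero-coupon maturities (years) and yields (%).
maturity = np.array([0.5, 1, 2, 3, 5, 7, 10, 15, 20, 25])
yld = np.array([4.2, 4.4, 4.7, 4.9, 5.1, 5.2, 5.25, 5.2, 5.1, 5.0])

# Fit a cubic (k=3) spline directly to the yield curve; the smoothing
# factor s bounds the sum of squared residuals, trading fitting error
# against smoothness of the curve.
curve = UnivariateSpline(maturity, yld, k=3, s=0.01)
fitted = curve(maturity)
rmse = float(np.sqrt(np.mean((fitted - yld) ** 2)))
```

Raising `s` oversmooths the curve and increases the fitting error, which is the trade-off the paper's smoothness-setting procedure addresses.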
Abstract:
A system for the NDT testing of the integrity of composite materials and of adhesive bonds has been developed to meet industrial requirements. The vibration techniques used were found to be applicable to the development of fluid measuring transducers. The vibrational spectra of thin rectangular bars were used for the NDT work. A machined cut in a bar had a significant effect on the spectrum, but a genuine crack gave an unambiguous response at high amplitudes: the generation of fretting crack noise at frequencies far above that of the drive. A specially designed vibrational decrement meter, which in effect measures mechanical energy loss, enabled a numerical classification of material adhesion to be obtained. This was used to study bars which had been flame or plasma sprayed with a variety of materials, and has become a useful tool in optimising coating methods. A direct industrial application was to classify piston rings of high-performance I.C. engines. Each consists of a cast iron ring with a channel into which molybdenum, a good bearing surface, is sprayed. The NDT classification agreed quite well with the destructive test normally used. The techniques and equipment used for the NDT work were applied to developing the tuning fork transducers investigated by Hassan into commercial density and viscosity devices. Using narrowly spaced, large-area tines, a thin lamina of fluid is trapped between them; it stores a large fraction of the vibrational energy which, acting as an inertia load, reduces the frequency. Magnetostrictive and piezoelectric effects, used separately or in combination, enable the fork to be operated through a flange, which allows it to be used in pipeline or 'dipstick' applications. Using a different tine geometry, the viscosity loading can be made predominant; this, as well as the signal decrement of the density transducer, makes a practical viscometer.
Abstract:
Background: Age-related macular degeneration (AMD) is the leading cause of visual disability in people over 60 years of age in the developed world. The success of treatment deteriorates with increased latency of diagnosis. The purpose of this study was to determine the reliability of the macular mapping test (MMT), and to investigate its potential as a screening tool. Methods: The study population comprised 31 healthy eyes of 31 participants. To assess reliability, four MMT measurements were taken in two sessions separated by one hour by two practitioners, with reversal of order in the second session. MMT readings were also taken from 17 eyes affected by age-related maculopathy (ARM) and 12 eyes affected by AMD. Results: For the normal cohort, average MMT scores ranged from 85.5 to 100.0 MMT points. Scores ranged from 79.0 to 99.0 for the ARM group and from 9.0 to 92.0 for the AMD group. MMT scores were reliable to within ±7.0 points. The difference between AMD-affected eyes and controls was significant (z = 3.761, p < 0.001). The difference between ARM-affected eyes and controls was not significant (z = -0.216, p = 0.829). Conclusion: The reliability data show that a change of 14 points or more is required to indicate a clinically significant change. This value is required for use of the MMT as an outcome measure in clinical trials. Although there was no difference between MMT scores from ARM-affected eyes and controls, the MMT has the advantage over the Amsler grid in that it uses a letter target, has a peripheral fixation aid, and provides a numerical score. This score could be beneficial in office and home monitoring of AMD progression, as well as an outcome measure in clinical research. © 2005 Bartlett et al; licensee BioMed Central Ltd.
Abstract:
In this second article, statistical ideas are extended to the problem of testing whether there is a true difference between two samples of measurements. First, it will be shown that the difference between the means of two samples comes from a population of such differences which is normally distributed. Second, the 't' distribution, one of the most important in statistics, will be applied to a test of the difference between two means using a simple data set drawn from a clinical experiment in optometry. Third, in making a t-test, a statistical judgement is made as to whether there is a significant difference between the means of two samples. Before the widespread use of statistical software, this judgement was made with reference to a statistical table. Even if such tables are not used, it is useful to understand their logical structure and how to use them. Finally, the analysis of data, which are known to depart significantly from the normal distribution, will be described.
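The two-sample t-test described here can be sketched in a few lines using a pooled variance estimate; the data are an invented illustrative set, not the clinical data from the article:

```python
import math
from statistics import mean, variance

def two_sample_t(sample1, sample2):
    # Student's t statistic for two independent samples with a pooled
    # variance estimate, plus the degrees of freedom for the table lookup.
    n1, n2 = len(sample1), len(sample2)
    s2_pooled = ((n1 - 1) * variance(sample1)
                 + (n2 - 1) * variance(sample2)) / (n1 + n2 - 2)
    t = (mean(sample1) - mean(sample2)) / math.sqrt(s2_pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Illustrative data only (e.g. a measurement under two treatments):
group1 = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
group2 = [11.2, 11.5, 11.0, 11.4, 11.3, 11.1]
t_stat, df = two_sample_t(group1, group2)

# As with a printed statistical table: compare |t| against the critical
# value for df = 10 at the 5% two-tailed level (2.228).
significant = abs(t_stat) > 2.228
```

With statistical software an exact p-value replaces the table lookup, but the critical-value logic is the same.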
Abstract:
In any investigation in optometry involving more than two treatment or patient groups, an investigator should use ANOVA to analyse the results, assuming that the data conform reasonably well to the assumptions of the analysis. Ideally, specific null hypotheses should be built into the experiment from the start so that the treatment variation can be partitioned to test these effects directly. If 'post-hoc' tests are used, an experimenter should examine the degree of protection offered by the test against the possibility of making either a type 1 or a type 2 error. All experimenters should be aware of the complexity of ANOVA. The present article describes only one common form of the analysis, viz., that which applies to a single classification of the treatments in a randomised design. There are many different forms of the analysis, each of which is appropriate to a specific experimental design. The uses of some of the most common forms of ANOVA in optometry are described in a further article. If in any doubt, an investigator should consult a statistician with experience of analysing optometric experiments, since once an experiment with an unsuitable design is under way, there may be little that a statistician can do to help.
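The single-classification (one-way) ANOVA described here can be sketched with SciPy; the three groups below are invented illustrative measurements, not data from the article:

```python
from scipy.stats import f_oneway

# Hypothetical measurements for three treatment groups
# (illustrative numbers only):
group1 = [0.52, 0.49, 0.55, 0.50, 0.53]
group2 = [0.61, 0.58, 0.64, 0.60, 0.59]
group3 = [0.48, 0.51, 0.47, 0.50, 0.49]

# One-way ANOVA across the three groups: a significant F statistic
# indicates that at least one group mean differs, after which a
# post-hoc test is needed to localise the difference.
f_stat, p_value = f_oneway(group1, group2, group3)
```

Note that the F test alone does not say which group differs, which is why the article stresses choosing post-hoc tests with known protection against type 1 and type 2 errors.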