Abstract:
Aim: Contrast sensitivity (CS) provides important information on visual function. This study aimed to assess differences in clinical expediency of the CS increment-matched new back-lit and original paper versions of the Melbourne Edge Test (MET) in determining the CS of the visually impaired. Methods: The back-lit and paper MET were administered to 75 visually impaired subjects (28-97 years). Two versions of the back-lit MET acetates were used to match the CS increments with the paper-based MET. Measures of CS were repeated after 30 min, and again in the presence of a focal light source directed onto the MET. Visual acuity was measured with a Bailey-Lovie chart, and subjects rated how much difficulty they had with face and vehicle recognition. Results: The back-lit MET gave a significantly higher CS than the paper-based version (14.2 ± 4.1 dB vs 11.3 ± 4.3 dB, p < 0.001). A significantly higher reading resulted with repetition of the paper-based MET (by 1.0 ± 1.7 dB, p < 0.001), but this was not evident with the back-lit MET (by 0.1 ± 1.4 dB, p = 0.53). The MET readings were increased by a focal light source, in both the back-lit (by 0.3 ± 0.81 dB, p < 0.01) and paper-based (by 1.2 ± 1.7 dB, p < 0.001) versions. CS as measured by the back-lit and paper-based versions of the MET was significantly correlated with patients' perceived ability to recognise faces (r = 0.71 and r = 0.85, respectively; p < 0.001) and vehicles (r = 0.67 and r = 0.82, respectively; p < 0.001), and with distance visual acuity (both r = -0.64; p < 0.001). Conclusions: The CS increment-matched back-lit MET gives CS values approximately 3 dB higher than the old paper-based test, is more repeatable, and is less affected by external light sources. Clinically, the MET score provides information on patient difficulties with visual tasks, such as recognising faces. © 2005 The College of Optometrists.
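The repeatability results reported here (a mean change with a p-value for repeated readings on the same subjects) are of the kind a paired-samples t-test produces. A minimal sketch, with illustrative numbers in place of the study's MET readings:

```python
import numpy as np
from scipy.stats import ttest_rel

# Illustrative first and repeat MET readings (dB) for the same subjects;
# the study's actual data are not reproduced here.
first  = np.array([11.0, 12.5, 9.0, 14.0, 10.5, 13.0])
repeat = np.array([12.0, 13.0, 10.5, 14.5, 12.0, 14.0])

diff = repeat - first
t, p = ttest_rel(repeat, first)  # paired comparison of the two sessions
print(f"mean change = {diff.mean():.1f} ± {diff.std(ddof=1):.1f} dB, p = {p:.3f}")
```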
Abstract:
Data mining projects that use decision trees to classify test cases typically rank the classified cases by the probabilities the trees provide. A better ranking method is needed for cases classified by a binary decision tree, because these probabilities are not always accurate or reliable: existing decision tree algorithms compute the same probability estimate for every case that falls into a given leaf. This is one reason the probability estimates produced by decision tree algorithms cannot serve as an accurate means of deciding whether a test case has been correctly classified. Isabelle Alvarez has proposed a method for ranking the test cases classified by a binary decision tree [Alvarez, 2004]. In this paper we compare ranking methods based on the probability estimate, on the sensitivity of a particular case, or on both.
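A minimal sketch of the limitation described above, using scikit-learn (an illustrative tooling choice, not the paper's): `predict_proba` returns the training-class frequency of the leaf a case lands in, so every test case in the same leaf receives an identical ranking score.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Fit a binary decision tree on synthetic data.
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Every test case in the same leaf gets the same probability estimate --
# the coarse ranking the abstract criticises.
leaf_ids = tree.apply(X_test)
proba = tree.predict_proba(X_test)[:, 1]
for leaf in np.unique(leaf_ids):
    scores = np.unique(proba[leaf_ids == leaf])
    print(f"leaf {leaf}: {scores}")   # one distinct score per leaf
```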
Abstract:
Aim: To evaluate OneTouch® Verio™ test strip performance at hypoglycaemic blood glucose (BG) levels (<3.9 mmol/L [<70 mg/dL]) across seven clinical studies. Methods: Trained clinical staff performed duplicate capillary BG monitoring system tests on 700 individuals with type 1 and type 2 diabetes using blood from a single fingerstick lancing. BG reference values were obtained using a YSI 2300 STAT™ Glucose Analyzer. The number and percentage of BG values within ±0.83 mmol/L (±15 mg/dL) and ±0.56 mmol/L (±10 mg/dL) were calculated at BG concentrations of <3.9 mmol/L (<70 mg/dL), <3.3 mmol/L (<60 mg/dL), and <2.8 mmol/L (<50 mg/dL). Results: At BG concentrations <3.9 mmol/L (<70 mg/dL), 674/674 (100%) of meter results were within ±0.83 mmol/L (±15 mg/dL) and 666/674 (98.8%) were within ±0.56 mmol/L (±10 mg/dL) of reference values. At BG concentrations <3.3 mmol/L (<60 mg/dL) and <2.8 mmol/L (<50 mg/dL), 358/358 (100%) and 270/270 (100%) were within ±0.56 mmol/L (±10 mg/dL) of reference values, respectively. Conclusion: In this analysis of data from seven independent studies, OneTouch Verio test strips provided highly accurate results at hypoglycaemic BG levels. © 2012 Elsevier Ireland Ltd.
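The accuracy tabulation described above reduces to counting paired readings whose absolute difference from the reference falls within a band. A minimal sketch under that reading; the arrays are illustrative stand-ins for the meter and YSI reference values.

```python
import numpy as np

def within_band(meter, reference, band):
    """Fraction of meter readings within +/- band (mmol/L) of reference."""
    meter, reference = np.asarray(meter), np.asarray(reference)
    return np.mean(np.abs(meter - reference) <= band)

# Illustrative paired readings (mmol/L); the study used YSI 2300 values.
reference = np.array([3.8, 3.2, 2.7, 3.5, 2.9])
meter     = np.array([3.7, 3.3, 2.6, 3.6, 2.8])

for cutoff in (3.9, 3.3, 2.8):
    mask = reference < cutoff
    for band in (0.83, 0.56):
        frac = within_band(meter[mask], reference[mask], band)
        print(f"<{cutoff} mmol/L, within ±{band}: {frac:.1%}")
```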
Abstract:
The paper addresses the task of cluster analysis of a given set of objects on the basis of the information contained in the objects' description table. Various methods of cluster analysis are briefly reviewed. A heuristic method and classification rules for the given set of objects are presented for cases where neither the division into classes nor the number of classes is known. The algorithm is checked on a test example and on two program products (PP): learning systems and software for company management. An analysis of the results is presented.
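The abstract does not spell out its heuristic rules, so the following is only a generic illustration of the underlying problem it names, choosing the number of classes when it is not known, using k-means with silhouette-based selection; this is not necessarily the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Illustrative object-description table (rows = objects, cols = attributes).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.5, size=(30, 2)) for loc in (0, 3, 6)])

# When the number of classes is unknown, one standard heuristic is to try
# several candidate k and keep the partition with the best silhouette score.
best_k, best_score = None, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k, round(best_score, 3))   # expect k = 3 for this toy data
```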
Abstract:
The purpose of the discussed optimal valid partitioning (OVP) methods is to uncover the effect of ordinal or continuous explanatory variables on outcome variables of different types. The OVP approach is based on searching for partitions of the explanatory-variable space that best separate observations with different outcome levels. Partitions of single-variable ranges, or two-dimensional admissible areas for pairs of variables, are searched within the corresponding families. The statistical validity of the revealed regularities is estimated with a permutation test that repeats the search for the optimal partition on each permuted dataset. A method for selecting output regularities is discussed, based on evaluating validity with two types of permutation tests.
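A minimal sketch of the validation step described above, assuming a single-variable threshold split as the partition family (an illustrative simplification): the optimal-partition search itself is rerun on each permuted dataset, so the resulting p-value accounts for having optimised over all candidate splits.

```python
import numpy as np

def best_split_score(x, y):
    """Best between-group separation over all threshold splits of x.
    Score: absolute difference of outcome means on the two sides."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    return max(abs(y[:i].mean() - y[i:].mean()) for i in range(1, len(y)))

rng = np.random.default_rng(1)
x = rng.uniform(size=200)
y = (x > 0.6).astype(float) + rng.normal(0, 0.5, size=200)

observed = best_split_score(x, y)

# Permutation test: repeat the *search* on shuffled outcomes so the
# p-value is valid despite the optimisation over splits.
perm = [best_split_score(x, rng.permutation(y)) for _ in range(999)]
p_value = (1 + sum(s >= observed for s in perm)) / (1 + len(perm))
print(round(observed, 3), round(p_value, 4))
```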
Abstract:
The subject of collinearity of three points is challenging for 7th-grade students. This article discusses different approaches to demonstrating that three points are collinear: the straight angle, the axiom on the uniform mapping of an angle onto a half-plane, the parallel axiom, the vector method, and homothety. An experiment using GEONExT was conducted to demonstrate, by visual methods, the collinearity of three points as a distinct type of problem.
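The vector method mentioned above reduces to a zero cross product: A, B and C are collinear exactly when AB × AC = 0. A short worked sketch:

```python
def collinear(a, b, c, eps=1e-9):
    """Vector method: A, B, C are collinear iff the z-component of the
    2-D cross product AB x AC vanishes."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    acx, acy = c[0] - a[0], c[1] - a[1]
    return abs(abx * acy - aby * acx) < eps

print(collinear((0, 0), (1, 2), (2, 4)))   # True: C lies on line through A, B
print(collinear((0, 0), (1, 2), (2, 5)))   # False
```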
Abstract:
Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for signal-dependent noise (AAS, BM3Dc, HHM, TLS) and for independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise, as well as real clinical fluoroscopic images, were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. The performance of the algorithms was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective at denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively. © 2012 Elsevier Ltd. All rights reserved.
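The evaluation metrics named above have standard definitions. A minimal sketch of MSE, PSNR and SNR, assuming an 8-bit intensity range; the ramp test image and Poisson degradation are illustrative, not the paper's data:

```python
import numpy as np

def mse(ref, test):
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with max value `peak`."""
    return 10.0 * np.log10(peak ** 2 / mse(ref, test))

def snr(ref, test):
    """Signal-to-noise ratio in dB relative to the reference energy."""
    ref = np.asarray(ref, float)
    return 10.0 * np.log10(np.mean(ref ** 2) / mse(ref, test))

# Illustrative: a clean ramp image degraded by Poisson quantum noise.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(10, 200, 128), (128, 1))
noisy = rng.poisson(clean).astype(float)
print(f"MSE={mse(clean, noisy):.1f}  PSNR={psnr(clean, noisy):.1f} dB  "
      f"SNR={snr(clean, noisy):.1f} dB")
```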
Abstract:
2000 Mathematics Subject Classification: 62L10, 62L15.
Abstract:
2000 Mathematics Subject Classification: 62H30, 62J20, 62P12, 68T99
Abstract:
The “trial and error” method is fundamental for Master Mind decision algorithms. On the basis of Master Mind games and strategies we consider some data mining methods for tests using students as teachers. Voting, twins, opposite, simulate and observer methods are investigated. For a pure database these combinatorial algorithms are faster than many AI and Master Mind methods. The complexities of these algorithms are compared with basic combinatorial methods in AI. ACM Computing Classification System (1998): F.3.2, G.2.1, H.2.1, H.2.8, I.2.6.
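The voting, twins, opposite, simulate and observer methods are not detailed in the abstract. As a hedged illustration of the underlying trial-and-error idea only, here is the classic consistent-candidate elimination for Master Mind: guess, observe feedback, discard every code inconsistent with the feedback so far.

```python
from itertools import product

COLORS, PEGS = 6, 4

def feedback(secret, guess):
    """(black, white) pegs: exact matches, then colour-only matches."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum(min(secret.count(c), guess.count(c)) for c in range(COLORS))
    return black, common - black

def solve(secret):
    candidates = list(product(range(COLORS), repeat=PEGS))
    guesses = 0
    while True:
        guess = candidates[0]          # trial ...
        guesses += 1
        fb = feedback(secret, guess)
        if fb == (PEGS, 0):
            return guess, guesses
        # ... and error: keep only codes consistent with the feedback
        candidates = [c for c in candidates if feedback(c, guess) == fb]

print(solve((1, 4, 2, 2)))             # finds the secret in a few guesses
```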
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May, 2015
Abstract:
Background: Self-testing technology allows people to test themselves for chlamydia without professional support. This may result in reassurance and wider access to chlamydia testing, but anxiety could occur on receipt of positive results. This study aimed to identify factors important in understanding self-testing for chlamydia outside formal screening contexts, to explore the potential impacts of self-testing on individuals, and to identify theoretical constructs to form a framework for future research and intervention development. Methods: Eighteen university students participated in semi-structured interviews; eleven had self-tested for chlamydia. Data were analysed thematically using a Framework approach. Results: Perceived benefits of self-testing included its being convenient, anonymous and not requiring physical examination. There was concern about test accuracy, and some participants lacked confidence in using vulvo-vaginal swabs. While some participants expressed concern about the absence of professional support, all said they would seek help on receiving a positive result. Factors identified in Protection Motivation Theory and the Theory of Planned Behaviour, such as response efficacy and self-efficacy, were highly salient to participants when thinking about self-testing. Conclusions: These exploratory findings suggest that self-testing independently of formal health care systems may affect people no more negatively than testing by health care professionals. Participants' perceptions of self-testing behaviour were consistent with psychological theories. The findings suggest that interventions that increase confidence in using self-tests and provide reassurance about test accuracy may increase self-testing intentions.
Abstract:
Elemental analysis can become an important piece of evidence to assist the solution of a case. The work presented in this dissertation aims to evaluate the evidential value of the elemental composition of three particular matrices: ink, paper and glass. In the first part of this study, the analytical performance of LIBS and LA-ICP-MS methods was evaluated for paper, writing inks and printing inks. A total of 350 ink specimens were examined, including black and blue gel inks, ballpoint inks, inkjets and toners originating from several manufacturing sources and/or batches. The paper collection set consisted of over 200 paper specimens originating from 20 different paper sources produced by 10 different plants. Micro-homogeneity studies show smaller variation of elemental composition within a single source (i.e., sheet, pen or cartridge) than the observed variation between different sources (i.e., brands, types, batches). Significant and detectable differences in the elemental profiles of the inks and paper were observed between samples originating from different sources (discrimination of 87–100% of samples, depending on the sample set under investigation and the method applied). These results support the use of elemental analysis, using LA-ICP-MS and LIBS, for the examination of documents, and provide additional discrimination to the techniques currently used in document examination. In the second part of this study, a direct comparison between four analytical methods (µ-XRF, solution-ICP-MS, LA-ICP-MS and LIBS) was conducted for glass analysis using interlaboratory studies. The data provided by 21 participants were used to assess the performance of the analytical methods in associating glass samples from the same source and differentiating different sources, as well as the performance of different match criteria (confidence interval (±6s, ±5s, ±4s, ±3s, ±2s), modified confidence interval, t-test (sequential univariate, p=0.05 and p=0.01), t-test with Bonferroni correction (for multivariate comparisons), range overlap, and Hotelling's T² test). Error rates (Type 1 and Type 2) are reported for each of these match criteria and depend on the heterogeneity of the glass sources, the repeatability of the analytical measurements, and the number of elements measured. The study provides recommendations for analytical performance-based parameters for µ-XRF and LA-ICP-MS, as well as the best-performing match criteria for both analytical techniques, which forensic glass examiners can now apply.
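As one example of the match criteria listed above, a minimal sketch of an interval comparison of the ±4s type: the questioned fragment matches the control if, for every element, its mean concentration falls within the control mean ± k standard deviations. Laboratory protocols differ in detail; this shows only the general shape of the criterion, on synthetic data.

```python
import numpy as np

def interval_match(control, questioned, k=4.0):
    """Per-element +/- k*s match criterion.
    control, questioned: 2-D arrays, rows = replicate measurements,
    columns = element concentrations. True if the questioned mean lies
    within control_mean +/- k*control_sd for every element."""
    c_mean = control.mean(axis=0)
    c_sd = control.std(axis=0, ddof=1)
    q_mean = questioned.mean(axis=0)
    return bool(np.all(np.abs(q_mean - c_mean) <= k * c_sd))

rng = np.random.default_rng(0)
control  = rng.normal([100, 50, 5], [2, 1, 0.2], size=(5, 3))
same_src = rng.normal([100, 50, 5], [2, 1, 0.2], size=(3, 3))
diff_src = rng.normal([120, 50, 5], [2, 1, 0.2], size=(3, 3))
print(interval_match(control, same_src))   # expected True
print(interval_match(control, diff_src))   # expected False (Type 2 error avoided)
```

Tightening k trades Type 2 errors (false exclusions become rarer) against Type 1 errors (false inclusions become more likely), which is why the study evaluates several widths.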
Abstract:
The adverse health effects of long-term exposure to lead are well established, with major uptake into the human body occurring mainly through oral ingestion by young children. Lead-based paint was frequently used in homes built before 1978, particularly in inner-city areas. Minority populations experience the effects of lead poisoning disproportionately.

Lead-based paint abatement is costly. In the United States, residents of about 400,000 homes, occupied by 900,000 young children, lack the means to correct lead-based paint hazards. The magnitude of this problem demands research on affordable methods of hazard control. One method is encapsulation, defined as any covering or coating that acts as a permanent barrier between the lead-based paint surface and the environment.

Two encapsulants were tested for reliability and effective life span through an accelerated lifetime experiment that applied stresses exceeding those encountered under normal use conditions. The resulting time-to-failure data were used to extrapolate the failure time under conditions of normal use. Statistical analysis and models of the test data allow forecasting of long-term reliability relative to the 20-year encapsulation requirement. Typical housing material specimens simulating walls and doors coated with lead-based paint were overstressed before encapsulation. A second, un-aged set was also tested. Specimens were monitored after the stress test with a surface chemical testing pad to identify the presence of lead breaking through the encapsulant.

Graphical analysis proposed by Shapiro and Meeker and the general log-linear model developed by Cox were used to obtain results. Findings for the 80% reliability time to failure varied, with close to 21 years of life under normal use conditions for encapsulant A. The application of product A on the aged gypsum and aged wood substrates yielded slightly lower times. Encapsulant B had an 80% reliable life of 19.78 years.

This study reveals that encapsulation technologies can offer safe and effective control of lead-based paint hazards and may be less expensive than other options. The U.S. Department of Health and Human Services and the CDC are committed to eliminating childhood lead poisoning by 2010. This ambitious target is feasible, provided there is an efficient application of innovative technology, a goal to which this study aims to contribute.
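The "80% reliability time to failure" reported above is the 20th percentile of the fitted failure-time distribution (sometimes called the B20 life). As a hedged illustration only, with made-up complete failure times rather than the study's data or its Shapiro-Meeker/Cox models, a two-parameter Weibull fit:

```python
import numpy as np
from scipy.stats import weibull_min

# Illustrative complete (uncensored) times-to-failure in years; the study
# extrapolated such values from accelerated stress tests.
failures = np.array([14.1, 17.5, 19.2, 21.0, 22.8, 24.5, 26.1, 29.3])

# Fit a two-parameter Weibull (location fixed at 0).
shape, loc, scale = weibull_min.fit(failures, floc=0)

# 80% reliability life = 20th percentile of the failure distribution.
b20 = weibull_min.ppf(0.20, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, scale={scale:.1f} yr, 80%-reliable life={b20:.1f} yr")
```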
Abstract:
The purpose of this study was to aid in understanding the relationship between current Reading report card grading practices and standards-based state standardized testing results in Reading, and the factors associated with the alignment of this relationship. Report card and Florida Comprehensive Assessment Test (FCAT) data for 2004 were collected for 1064 third grade students in nine schools of one feeder pattern in Florida's Miami-Dade County Public Schools. A Third Grade Teacher Questionnaire was administered to 48 Reading teachers. The questionnaire contained items relating to teachers' education, teaching experience, grading practices, and beliefs about the FCAT, instructional Reading activities, methods, and materials.

Findings of this study support a strong relationship between report card grades and FCAT Reading achievement levels. However, individual school correlational analysis showed significant differences among schools' alignment measures. Higher teacher alignment between grades and FCAT levels was associated with teachers spending more time on individualized methods of Reading instruction and with teachers feeling there was not enough time to teach and help individual students. Lower teacher alignment of grades and achievement levels was associated with teachers taking homework into account in the final Reading grade. Teacher alignment of grades and achievement levels was not associated with teacher beliefs concerning the FCAT, instructional activities in Reading and Language Arts, the Reading program used, the model of delivery of the Reading program, instruction, or the type of instructional planning done by the teachers.

This study highlights the need for further investigations to determine additional teacher factors that may affect the alignment relationship between report card grades and standards-based state standardized testing results.