967 results for Standard fire tests
Abstract:
The aim of this in vitro study was to assess the agreement among four techniques used as gold standards for the validation of methods for occlusal caries detection. Sixty-five human permanent molars were selected and one site on each occlusal surface was chosen as the test site. The teeth were cut and prepared according to each technique: stereomicroscopy without coloring (1), dye enhancement with rhodamine B (2) and with fuchsine/acetic light green (3), and semi-quantitative microradiography (4). Digital photographs of each prepared tooth were assessed by three examiners for caries extension. Weighted kappa and Friedman's test with multiple comparisons were used to compare all techniques and identify statistically significant differences. Results: kappa values varied from 0.62 to 0.78, the latter being found for both dye enhancement methods. Friedman's test showed a statistically significant difference (P < 0.001), and multiple comparisons identified these differences among all techniques except between the two dye enhancement methods (rhodamine B and fuchsine/acetic light green). Cross-tabulation showed that stereomicroscopy overscored the lesions, whereas the two dye enhancement methods showed good agreement with each other. Furthermore, the outcome of caries diagnostic tests may be influenced by the validation method applied. Dye enhancement methods seem to be reliable as gold standard methods.
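As a concrete illustration of the agreement statistic used above, the following Python sketch computes a linearly weighted kappa between two of the techniques on made-up ordinal caries-extension scores; the scores, the scale, and the choice of linear weights are assumptions for illustration only, not the study's data.

```python
# Minimal sketch with hypothetical ratings -- not the study's data.
# Weighted kappa rewards near-agreement on an ordinal scale, which is why
# it is preferred over plain percent agreement for caries-extension scores.
from sklearn.metrics import cohen_kappa_score

# Hypothetical caries-extension scores (0 = sound ... 3 = deep dentine lesion)
# assigned to the same ten test sites by two of the four techniques.
technique_a = [0, 1, 1, 2, 3, 0, 2, 2, 1, 3]
technique_b = [0, 1, 2, 2, 3, 0, 2, 1, 1, 3]

kappa = cohen_kappa_score(technique_a, technique_b, weights="linear")
print(f"Linearly weighted kappa: {kappa:.2f}")
```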
Abstract:
Inert gas washout tests, performed using the single- or multiple-breath washout technique, were first described over 60 years ago. As measures of ventilation distribution inhomogeneity, they offer complementary information to standard lung function tests, such as spirometry, as well as improved feasibility across wider age ranges and improved sensitivity in the detection of early lung damage. These benefits have led to a resurgence of interest in these techniques from manufacturers, clinicians and researchers, yet detailed guidelines for washout equipment specifications, test performance and analysis are lacking. This manuscript provides recommendations about these aspects, applicable to both the paediatric and adult testing environment, whilst outlining the important principles that are essential for the reader to understand. These recommendations are evidence based, where possible, but in many places represent expert opinion from a working group with a large collective experience in the techniques discussed. Finally, the important issues that remain unanswered are highlighted. By addressing these important issues and directing future research, the hope is to facilitate the incorporation of these promising tests into routine clinical practice.
Abstract:
BACKGROUND: Congestive heart failure (CHF) is a major public health problem. The use of B-type natriuretic peptide (BNP) tests shows promising diagnostic accuracy. Herein, we summarize the evidence on the accuracy of BNP tests in the diagnosis of CHF and compare the performance of rapid enzyme-linked immunosorbent assay (ELISA) and standard radioimmunosorbent assay (RIA) tests. METHODS: We searched electronic databases and the reference lists of included studies, and we contacted experts. Data were extracted on the study population, the type of test used, and methods. Receiver operating characteristic (ROC) plots and summary ROC curves were produced and negative likelihood ratios pooled. Random-effect meta-analysis and metaregression were used to combine data and explore sources of between-study heterogeneity. RESULTS: Nineteen studies describing 22 patient populations (9 ELISA and 13 RIA) and 9093 patients were included. The diagnosis of CHF was verified by echocardiography, radionuclide scan, or echocardiography combined with clinical criteria. The pooled negative likelihood ratio overall from random-effect meta-analysis was 0.18 (95% confidence interval [CI], 0.13-0.23). It was lower for the ELISA test (0.12; 95% CI, 0.09-0.16) than for the RIA test (0.23; 95% CI, 0.16-0.32). For a pretest probability of 20%, which is typical for patients with suspected CHF in primary care, a negative result of the ELISA test would produce a posttest probability of 2.9%; a negative RIA test, a posttest probability of 5.4%. CONCLUSIONS: The use of BNP tests to rule out CHF in primary care settings could reduce demand for echocardiography. The advantages of rapid ELISA tests need to be balanced against their higher cost.
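The post-test probabilities quoted above follow from Bayes' theorem applied on the odds scale. The short Python sketch below reproduces that arithmetic; the function name and the single pre-test probability are illustrative, not taken from the paper.

```python
# Minimal sketch of the pre-test -> post-test probability arithmetic
# quoted in the abstract (illustrative only, not the authors' code).

def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability into a post-test probability for a given LR."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

pretest = 0.20  # typical for suspected CHF in primary care, per the abstract
for test_name, negative_lr in [("ELISA", 0.12), ("RIA", 0.23)]:
    p = posttest_probability(pretest, negative_lr)
    print(f"{test_name}: post-test probability after a negative result = {p:.1%}")
# Prints roughly 2.9% for ELISA and 5.4% for RIA, matching the abstract.
```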
Abstract:
The objectives of this study were to develop and validate a tool for assessing pain in population-based observational studies and to develop three subscales for back/neck, upper extremity and lower extremity pain. Based on a literature review, items were extracted from validated questionnaires and reviewed by an expert panel. The initial questionnaire consisted of a pain manikin and 34 items relating to (i) intensity of pain in different body regions (7 items), (ii) pain during activities of daily living (18 items) and (iii) various pain modalities (9 items). Psychometric validation of the initial questionnaire was performed in a random sample of the German-speaking Swiss population. Analyses included tests for reliability, correlation analysis, principal components factor analysis, tests for internal consistency and validity. Overall, 16,634 of 23,763 eligible individuals participated (70%). Test-retest reliability coefficients ranged from 0.32 to 0.97, but only three coefficients were below 0.60. Subscales were constructed by combining four items for each subscale. Item-total coefficients ranged from 0.76 to 0.86 and Cronbach's alpha values were 0.75 or higher for all subscales. Correlation coefficients between subscales and three validated instruments (WOMAC, SPADI and Oswestry) ranged from 0.62 to 0.79. The final Pain Standard Evaluation Questionnaire (SEQ Pain) included 28 items and the pain manikin and accounted for the multidimensionality of pain by assessing pain location and intensity, pain during activity, triggers and time of onset of pain, and frequency of pain medication use. It was found to be reliable and valid for the assessment of pain in population-based observational studies.
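To make the internal-consistency figure concrete, the sketch below computes Cronbach's alpha for a hypothetical four-item subscale; the data are simulated and the helper function is an illustration, not the SEQ Pain analysis code.

```python
# Minimal sketch with simulated data -- not the SEQ Pain dataset.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one subscale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=500)  # shared underlying pain severity
subscale = np.column_stack(
    [latent + rng.normal(scale=0.7, size=500) for _ in range(4)]  # four correlated items
)
print(f"Cronbach's alpha: {cronbach_alpha(subscale):.2f}")
```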
Abstract:
OBJECTIVE: To report our experience with the successful removal of visible tension-free vaginal tape (TVT) by standard transurethral electroresection, as intravesical tape erosion after TVT is a rare complication, and removal can be challenging, with few cases reported. PATIENTS AND METHODS: Five patients presenting with TVT erosion into the bladder were treated at our institutions from December 2004 to July 2007; all had standard transurethral electroresection. Their records were reviewed retrospectively to retrieve data on presenting symptoms, diagnostic tests, surgical procedures and outcomes. RESULTS: The median (range) interval between the TVT procedure and the onset of symptoms was 17 (1-32) months. The predominant symptoms were painful micturition, recurrent urinary tract infection (UTI), urgency and urge incontinence. There were no complications during surgery. The storage symptoms and UTI resolved completely after removing the eroded mesh in all but one patient. Cystoscopy at 1 month after surgery showed complete healing of the bladder mucosa. CONCLUSION: Although TVT erosion into the bladder is rare, persistent symptoms, particularly recurrent UTIs, must raise some suspicion for this condition. Standard transurethral electroresection seems to be a safe, simple, minimally invasive and successful treatment option for TVT removal.
Abstract:
There is no accepted way of measuring prothrombin time without delay in patients undergoing major surgery who are at risk of intraoperative dilutional and consumption coagulopathy due to bleeding and volume replacement with crystalloids or colloids. Decisions to give fresh frozen plasma and procoagulant drugs therefore have to rely on clinical judgment in these situations. Point-of-care devices are considerably faster than standard laboratory methods. In this study we assessed the accuracy of a point-of-care (PoC) device measuring prothrombin time compared with the standard laboratory method. Patients undergoing major surgery and intensive care unit patients were included. PoC prothrombin time was measured with the CoaguChek XS Plus (Roche Diagnostics, Switzerland). PoC and reference tests were performed independently and interpreted under blinded conditions. Using a cut-off prothrombin time of 50%, we calculated diagnostic accuracy measures, plotted a receiver operating characteristic (ROC) curve and tested for equivalence between the two methods. PoC sensitivity and specificity were 95% (95% CI 77%, 100%) and 95% (95% CI 91%, 98%), respectively. The negative likelihood ratio was 0.05 (95% CI 0.01, 0.32). The positive likelihood ratio was 19.57 (95% CI 10.62, 36.06). The area under the ROC curve was 0.988. Equivalence between the two methods was confirmed. The CoaguChek XS Plus is a rapid and highly accurate test compared with the reference test. These findings suggest that PoC testing will be useful for monitoring intraoperative prothrombin time when coagulopathy is suspected. It could lead to a more rational use of expensive and limited blood bank resources.
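The likelihood ratios reported above are closely approximated by the standard formulas LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity, as the short sketch below shows; it uses the rounded sensitivity and specificity, so small differences from the published values (which come from the raw 2x2 counts) are expected.

```python
# Minimal sketch -- illustrative arithmetic, not the study's analysis code.

def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """Return (LR+, LR-) for a binary diagnostic test."""
    lr_positive = sensitivity / (1.0 - specificity)
    lr_negative = (1.0 - sensitivity) / specificity
    return lr_positive, lr_negative

lr_pos, lr_neg = likelihood_ratios(sensitivity=0.95, specificity=0.95)
print(f"LR+ ~ {lr_pos:.1f}, LR- ~ {lr_neg:.2f}")  # ~19.0 and ~0.05 from the rounded inputs
```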
Abstract:
Background: Dementia is a multifaceted disorder that impairs cognitive functions such as memory, language, and the executive functions necessary to plan, organize, and prioritize tasks required for goal-directed behaviors. In most cases, individuals with dementia experience difficulties interacting with physical and social environments. The purpose of this study was to establish the ecological validity and initial construct validity of a fire evacuation Virtual Reality Day-Out Task (VR-DOT) environment, based on performance profiles, as a screening tool for early dementia. Objective: The objectives were (1) to examine the relationships among the performances of 3 groups of participants in the VR-DOT and traditional neuropsychological tests employed to assess executive functions, and (2) to compare the performance of participants with mild Alzheimer’s-type dementia (AD) to those with amnestic single-domain mild cognitive impairment (MCI) and healthy controls in the VR-DOT and traditional neuropsychological tests used to assess executive functions. We hypothesized that the 2 cognitively impaired groups would have distinct performance profiles and show significantly impaired independent functioning in activities of daily living (ADL) compared to the healthy controls. Methods: The study population included 3 groups: 72 healthy control elderly participants, 65 amnestic MCI participants, and 68 mild AD participants. A natural user interface framework based on a fire evacuation VR-DOT environment was used to assess the physical and cognitive abilities of seniors over 3 years. VR-DOT focuses on the subtle errors and patterns in performing everyday activities and has the advantage of not depending on the subjective rating of an individual person. We further assessed functional capacity with neuropsychological tests (including measures of attention, memory, working memory, executive functions, language, and depression). We also evaluated performance in finger tapping, grip strength, stride length, gait speed, and chair stands, both separately and while performing VR-DOTs, in order to correlate these measures with VR-DOT performance, because performance while navigating a virtual environment is a valid and reliable indicator of cognitive decline in elderly persons. Results: The mild AD group was more impaired than the amnestic MCI group, and both were more impaired than healthy controls. The novel VR-DOT functional index correlated significantly with standard cognitive and functional measurements, such as mini-mental state examination (MMSE; rho=0.26, P=.01) and Bristol Activities of Daily Living (ADL) scale scores (rho=0.32, P=.001). Conclusions: Functional impairment is a defining characteristic of predementia and is partly dependent on the degree of cognitive impairment. The novel virtual reality measures of functional ability seem more sensitive to functional impairment than qualitative measures in predementia, thus accurately differentiating these patients from healthy controls. We conclude that VR-DOT is an effective tool for discriminating predementia and mild AD from controls by detecting differences in errors, omissions, and perseverations while measuring ADL functional ability.
Abstract:
In the past 2 decades, we have observed a rapid increase in infections due to multidrug-resistant Enterobacteriaceae. Regrettably, these isolates possess genes encoding extended-spectrum β-lactamases (e.g., blaCTX-M, blaTEM, blaSHV) or plasmid-mediated AmpCs (e.g., blaCMY) that confer resistance to last-generation cephalosporins. Furthermore, other resistance traits against quinolones (e.g., mutations in gyrA and parC, qnr elements) and aminoglycosides (e.g., aminoglycoside-modifying enzymes and 16S rRNA methylases) are also frequently co-associated. Even more concerning is the rapid increase of Enterobacteriaceae carrying genes conferring resistance to carbapenems (e.g., blaKPC, blaNDM). The spread of these pathogens therefore imperils our antibiotic options. Unfortunately, standard microbiological procedures require several days to isolate the responsible pathogen and to provide correct antimicrobial susceptibility test results. This delay hampers the rapid implementation of adequate antimicrobial treatment and infection control countermeasures. Thus, there is emerging interest in the earlier and more sensitive detection of resistance mechanisms. Modern non-phenotypic tests are promising in this respect and can hence influence both clinical outcome and healthcare costs. In this review, we present a summary of the most advanced methods (e.g., next-generation DNA sequencing, multiplex PCRs, real-time PCRs, microarrays, MALDI-TOF MS, and PCR/ESI MS) presently available for the rapid detection of antibiotic resistance genes in Enterobacteriaceae. Taking into account speed, manageability, accuracy, versatility, and costs, the possible settings of application (research, clinic, and epidemiology) of these methods and their superiority over standard phenotypic methods are discussed.
Abstract:
Methods are described for working with Nosema apis and Nosema ceranae in the field and in the laboratory. For fieldwork, different sampling methods are described, both for determining colony-level infection at a given point in time and for following temporal infection dynamics. Suggestions are made for how to standardise field trials for evaluating treatments and disease impact. The laboratory methods described include different means of determining colony-level and individual bee infection levels, and methods for species determination, including light microscopy, electron microscopy, and molecular methods (PCR). Suggestions are made for how to standardise cage trials, and different inoculation methods for infecting bees are described, including control methods for spore viability. A cell culture system for in vitro rearing of Nosema spp. is described. Finally, how to conduct different types of experiments is described, including infectious dose, dose effects, course of infection and longevity tests.
Abstract:
We investigated if CLSI M27-A2 Candida species breakpoints for fluconazole MIC are valid when read at 24 h. Analysis of a data set showed good correlation between 48- and 24-h MICs, as well as similar outcomes and pharmacodynamic efficacy parameters, except for isolates in the susceptible dose-dependent category, such as Candida glabrata.
Abstract:
OBJECTIVE: The cost-effectiveness of cast nonprecious frameworks has increased their prevalence in cemented implant crowns. The purpose of this study was to assess the effect of the design and height of the retentive component of a standard titanium implant abutment on the fit, possible horizontal rotation and retention forces of cast nonprecious alloy crowns prior to cementation. MATERIALS AND METHODS: Two abutment designs were examined: Type A with a 6° taper and 8 antirotation planes (Straumann Tissue-Level RN) and Type B with a 7.5° taper and 1 antirotation plane (SICace implant). Both types were analyzed using 60 crowns: 20 with a full abutment height (6 mm), 20 with a medium abutment height (4 mm), and 20 with a minimal (2.5 mm) abutment height. The marginal and internal fit and the degree of possible rotation were evaluated using polyvinylsiloxane impressions under a light microscope (magnification ×50). To measure the retention force, a custom force-measuring device was employed. STATISTICAL ANALYSIS: One-sided Wilcoxon rank-sum tests with Bonferroni-Holm corrections, Fisher's exact tests, and Spearman's rank correlation coefficient were used. RESULTS: Type A exhibited larger marginal gaps (primary end-point: 138 ± 59 μm vs. 55 ± 20 μm, P < 0.001) but less rotation (P < 0.001) than Type B. The internal fit was also better for Type A than for Type B (P < 0.001). The retention force of Type A (2.49 ± 3.2 N) was higher (P = 0.019) than that of Type B (1.27 ± 0.84 N). Reduction in abutment height did not affect the variables observed. CONCLUSION: Less-tapered abutments with more antirotation planes provide an increase in the retention force, which limits horizontal rotation but widens the marginal gaps of the crowns. Thus, casting of nonprecious crowns with Type A abutments may result in clinically unfavorable marginal gaps.
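As an illustration of the kind of analysis named in the statistics section, the sketch below runs one-sided Wilcoxon rank-sum tests on simulated measurements for several end-points and applies a Bonferroni-Holm correction; the data, group means, and number of end-points are assumptions, not the study's.

```python
# Minimal sketch with simulated data -- not the study's measurements.
import numpy as np
from scipy.stats import mannwhitneyu                  # Wilcoxon rank-sum test
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(42)
p_values = []
for _ in range(3):  # e.g. three fit-related end-points (assumed number)
    type_a = rng.normal(loc=1.4, scale=0.3, size=20)  # hypothetical measurements
    type_b = rng.normal(loc=1.0, scale=0.3, size=20)
    # One-sided test of whether Type A values tend to be larger than Type B.
    p_values.append(mannwhitneyu(type_a, type_b, alternative="greater").pvalue)

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print("Holm-adjusted p-values:", np.round(p_adjusted, 4), "reject H0:", reject)
```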
Abstract:
Charcoal analysis was conducted on sediment cores from three lakes to assess the relationship between the area and number of charcoal particles. Three charcoal-size parameters (maximum breadth, maximum length and area) were measured on sediment samples representing various vegetation types, including shrub tundra, boreal forest and temperate forest. These parameters and charcoal size-class distributions do not differ statistically between two sites where the same preparation technique (glycerine pollen slides) was used, but they differ for the same core when different techniques were applied. Results suggest that differences in charcoal size and size-class distribution are mainly caused by different preparation techniques and are not related to vegetation-type variation. At all three sites, the area and number concentrations of charcoal particles are highly correlated in standard pollen slides; 82–83% of the variability of the charcoal-area concentration can be explained by the particle-number concentration. Comparisons between predicted and measured area concentrations show that regression equations linking charcoal number and area concentrations can be used across sites as long as the same pollen-preparation technique is used. Thus it is concluded that it is unnecessary to measure charcoal areas in standard pollen slides – a time-consuming and tedious process.
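The relationship described above lends itself to a simple linear regression of charcoal-area concentration on particle-number concentration. The sketch below fits such a regression on simulated concentrations; the units, slope, and noise level are illustrative assumptions, not the lake data.

```python
# Minimal sketch with simulated concentrations -- not the original lake data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
number_conc = rng.lognormal(mean=3.0, sigma=0.8, size=120)        # particles per cm^3 (assumed)
area_conc = 0.05 * number_conc + rng.normal(scale=0.6, size=120)  # mm^2 per cm^3 (assumed)

fit = stats.linregress(number_conc, area_conc)
print(f"area ~ {fit.slope:.3f} * number + {fit.intercept:.3f}")
print(f"R^2 = {fit.rvalue**2:.2f}")  # the study reports ~0.82-0.83 across its sites
```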
Abstract:
Linkage and association studies are major analytical tools in the search for susceptibility genes for complex diseases. With the availability of large collections of single nucleotide polymorphisms (SNPs) and rapid progress in high-throughput genotyping technologies, together with the ambitious goals of the International HapMap Project, genetic markers covering the whole genome will be available for genome-wide linkage and association studies. In order not to inflate the type I error rate in genome-wide linkage and association studies, multiple-testing adjustment of the significance level for each independent linkage and/or association test is required, and this has led to suggested genome-wide significance cut-offs as low as 5 × 10⁻⁷. Almost no linkage and/or association study can meet such a stringent threshold with standard statistical methods. New statistics with higher power are urgently needed to tackle this problem. This dissertation proposes and explores a class of novel test statistics that can be applied to both population-based and family-based genetic data by employing a completely new strategy, which uses nonlinear transformations of the sample means to construct test statistics for linkage and association studies. Extensive simulation studies are used to illustrate the properties of the nonlinear test statistics. Power calculations are performed using both analytical and empirical methods. Finally, real data sets are analyzed with the nonlinear test statistics. Results show that the nonlinear test statistics have correct type I error rates, and most of the studied nonlinear test statistics have higher power than the standard chi-square test. This dissertation introduces a new idea for designing novel test statistics with high power and might open new ways to map susceptibility genes for complex diseases.
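The 5 × 10⁻⁷ cut-off follows from dividing a family-wise error rate by the assumed number of effectively independent tests. The sketch below shows that arithmetic and the 1-df chi-square value a standard test would need to reach it; the count of 100,000 independent tests is an illustrative assumption, not a figure from the dissertation.

```python
# Minimal sketch of the multiple-testing arithmetic -- illustrative numbers only.
from scipy import stats

family_wise_alpha = 0.05
independent_tests = 100_000  # assumed number of effectively independent tests

per_test_threshold = family_wise_alpha / independent_tests  # Bonferroni-style adjustment
critical_chi2 = stats.chi2.isf(per_test_threshold, df=1)    # statistic needed at that level

print(f"Per-test significance threshold: {per_test_threshold:.1e}")  # 5.0e-07
print(f"1-df chi-square needed to reach it: {critical_chi2:.1f}")    # roughly 25
```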
Abstract:
Objectives: The purpose of this study was to evaluate the effectiveness of the Danger Rangers Fire Safety Curriculum in increasing the fire safety knowledge of low-income, minority children in pre-kindergarten to third grade in Austin, TX during a summer day camp in 2007. Methods: Data were collected from child participants via teacher- and researcher-administered tests at pretest, posttest (immediately after the completion of the fire safety module), and at a 3-week follow-up to assess retention. In addition, a self-administered questionnaire was collected from parents pre- and post-intervention to assess home-related fire/burn risk factors. Paired t-tests were conducted using STATA 12.0 to evaluate pretest, posttest, and retention test mean scores, as well as the mean number of fire safety rules listed, by grade group. McNemar's test was used to determine whether there was a difference in fire-related risk factors as reported by the parents of the participants before and after the intervention. Only those who had paired data for the tests/surveys being compared were included in the analysis. Results: The first/second grade group and the third grade group scored significantly higher on fire safety knowledge on the posttest compared to the pretest (p<0.0001 for both groups). However, there was no significant change in knowledge scores for the pre-kindergarten to kindergarten group (p=0.14). Among the first/second grade group, knowledge levels did not significantly decline between the posttest and retention test (p=0.25). However, the third grade group had significantly lower fire safety knowledge scores on the retention test compared to the posttest (p<0.001). A similar increase was seen in the number of fire safety rules listed after the intervention (p<0.0001 between pretest and posttest for both the first/second grade and third grade groups), with no decline from the posttest to the retention test for the first/second grade group (p=0.50), but a significant decline in the third grade group (p=0.001). McNemar's chi-square test showed a significant increase in the percentage of participants' parents reporting smoke detector testing on a regular basis and having a fire escape plan for their family after the intervention (p=0.01 and p<0.0001, respectively). However, there was no significant change in the frequency of reports of the child playing in the kitchen while the parent cooks or of the house/apartment having a working smoke detector. Conclusion: We found that general fire safety knowledge improved and the number of specific fire safety rules listed increased among the first to third grade children who participated in the Danger Rangers fire safety program. However, the program did not significantly increase general fire safety knowledge in the pre-k/k group. This study also showed that a program targeted towards children has the potential to influence familial risk factors by proxy. The Danger Rangers Fire Safety Curriculum should be further evaluated by conducting a randomized controlled trial, using valid measures that assess fire safety attitudes, beliefs, and behaviors, as well as fire/burn-related outcomes.
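To illustrate the paired pre/post comparison described above, the sketch below runs McNemar's chi-square test on a hypothetical 2x2 table of parents' responses about having a fire escape plan; the counts are invented for illustration and are not the study's data.

```python
# Minimal sketch with hypothetical counts -- not the study's data.
from statsmodels.stats.contingency_tables import mcnemar

# Paired pre/post responses about having a family fire escape plan.
# Rows: before the intervention (no plan / plan); columns: after.
table = [[20, 25],   # no plan before: 20 still without a plan, 25 gained one
         [3, 52]]    # plan before: 3 no longer report one, 52 still do

result = mcnemar(table, exact=False, correction=True)  # chi-square version of the test
print(f"McNemar chi-square = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```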
Abstract:
The observed long-term decrease in the regional fire activity of Eastern Canada results in excessive accumulation of the organic layer on the forest floor of coniferous forests, which may affect climate-growth relationships in canopy trees. To test this hypothesis, we related tree-ring chronologies of black spruce (Picea mariana (Mill.) B.S.P.) to soil organic layer (SOL) depth at the stand scale in the lowland forests of Quebec's Clay Belt. Late-winter and early-spring temperatures and temperature at the end of the previous year's growing season were the major environmental controls of spruce growth at the monthly scale. The effect of the SOL on climate-growth relationships was moderate and reversed the association between tree growth and summer aridity from negative to positive: trees growing on thin organic layers were negatively affected by drought, whereas the opposite was true for sites with deep (>20-30 cm) organic layers. This indicates the development of wetter conditions on sites with a thicker SOL. A deep SOL was also associated with an increased frequency of negative growth anomalies (pointer years) in tree-ring chronologies. Our results emphasize the presence of nonlinear growth responses to SOL accumulation, suggesting 20-30 cm as a provisional threshold with respect to the effects of the SOL on the climate-growth relationship. Given current climatic conditions, characterized by generally low fire activity and a trend toward accumulation of the SOL, the importance of SOL effects in the black spruce ecosystem is expected to increase in the future.