35 results for Laboratory tests.
Abstract:
INTRODUCTION Erythema exsudativum multiforme majus (EEMM) and Stevens-Johnson Syndrome (SJS) are severe cutaneous reaction patterns caused by infections or drug hypersensitivity. The mechanisms by which the immune system mediates widespread keratinocyte death in EEMM/SJS are still to be elucidated. Here, we characterized blister cells isolated from a patient with EEMM/SJS overlap and investigated the cause of the reaction. METHODS Clinical classification of the cutaneous eruption was done according to the consensus definition of severe blistering skin reactions and histological analysis. Common infectious causes of EEMM were investigated using standard clinical techniques. T cell reactivity to potentially causative drugs was assessed by lymphocyte transformation tests (LTT). Lymphocytes isolated from blister fluid were analyzed for their expression of activation markers and cytotoxic molecules using flow cytometry. RESULTS A previously healthy 58-year-old woman suffered from a mild respiratory tract infection and therefore started treatment with the secretolytic drug Ambroxol. One week later, she presented with large palmar and plantar blisters, painful mucosal erosions, and flat atypical target lesions and maculae on the trunk, thus showing the clinical picture of an EEMM/SJS overlap (Fig. 1). This diagnosis was supported by histology, in which eosinophils were also found to infiltrate the upper dermis, pointing towards a cutaneous adverse drug reaction (cADR). Analysis of blister cells showed that they consisted mainly of CD8+ and CD4+ T cells and a smaller population of NK cells. Both the CD8+ T cells and the NK cells were highly activated and expressed Fas ligand and the cytotoxic molecule granulysin (Fig. 2). In addition, in comparison to NK cells from PBMC, NK cells in blister fluid strongly upregulated the expression of the skin-homing chemokine receptor CCR4 (Fig. 4).
Surprisingly, the LTT performed on PBMCs in the acute phase was positive for Ambroxol (SI = 2.9), whereas an LTT from a healthy but exposed individual did not show unspecific proliferation. Laboratory tests for common infectious causes of EEMM were negative (HSV-1/-2, M. pneumoniae, Parvovirus B19). However, 6 weeks later, specific proliferation to Ambroxol could no longer be observed in the LTT (Fig. 4).
Abstract:
Antimicrobial drugs may be used to treat diarrheal illness in companion animals. It is important to monitor antimicrobial use to better understand trends and patterns in antimicrobial resistance. There is no monitoring of antimicrobial use in companion animals in Canada. To explore how the use of electronic medical records could contribute to the ongoing, systematic collection of antimicrobial use data in companion animals, anonymized electronic medical records were extracted from 12 participating companion animal practices and warehoused at the University of Calgary. We used the pre-diagnostic clinical features of diarrhea as the case definition in this study. Using text-mining technologies, cases of diarrhea were described by each of the following variables: diagnostic laboratory tests performed, etiological diagnosis, and antimicrobial therapies. The ability of the text miner to accurately describe the cases for each of the variables was evaluated. It could not reliably classify cases in terms of diagnostic tests or etiological diagnosis; a manual review of a random sample of 500 diarrhea cases determined that 88/500 (17.6%) of the target cases underwent diagnostic testing, of which 36/88 (40.9%) had an etiological diagnosis. Text mining, compared with a human reviewer, could accurately identify cases that had been treated with antimicrobials, with high sensitivity (92%; 95% confidence interval, 88.1%-95.4%) and specificity (85%; 95% confidence interval, 80.2%-89.1%). Overall, 7400/15,928 (46.5%) of pets presenting with diarrhea were treated with antimicrobials. Some temporal trends and patterns of antimicrobial use are described. The results from this study suggest that informatics and electronic medical records could be useful for monitoring trends in antimicrobial use.
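The sensitivity and specificity above are simple proportions from a 2x2 comparison against the human reviewer, each with a binomial confidence interval. The following Python sketch is illustrative only: the abstract does not report the underlying 2x2 table, so the counts are hypothetical (chosen so the point estimates match the reported 92% and 85%), and a Wald interval is assumed rather than the authors' exact method.

```python
import math

def diagnostic_accuracy(tp, fp, fn, tn, z=1.96):
    """Sensitivity and specificity with Wald 95% confidence intervals.

    Returns two (estimate, lower, upper) tuples.
    """
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)  # Wald half-width
        return p, max(0.0, p - half), min(1.0, p + half)

    sens = prop_ci(tp, tp + fn)  # true positives among all truly treated cases
    spec = prop_ci(tn, tn + fp)  # true negatives among all truly untreated cases
    return sens, spec

# Hypothetical counts; only the point estimates mirror the abstract.
sens, spec = diagnostic_accuracy(tp=230, fp=30, fn=20, tn=170)
```

The same helper applies to any validation of a classifier against a manual gold standard.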
Abstract:
Polymorbid patients, diverse diagnostic and therapeutic options, more complex hospital structures, financial incentives, benchmarking, as well as perceptional and societal changes put pressure on medical doctors, particularly when medical errors surface. This is especially true in the emergency department, where patients face delayed or erroneous initial diagnostic or therapeutic measures and costly hospital stays due to sub-optimal triage. A "biomarker" is any laboratory tool with the potential to better detect and characterise disease, to simplify complex clinical algorithms, and to improve clinical problem solving in routine care. Biomarkers must be embedded in clinical algorithms to complement, not replace, basic medical skills. Unselected ordering of laboratory tests and shortcomings in test performance and interpretation contribute to diagnostic errors. Test results may be ambiguous, with false positive or false negative results, and generate unnecessary harm and costs. Laboratory tests should only be ordered if the results have clinical consequences. In studies, we must move beyond observational reporting and meta-analysis of diagnostic accuracies for biomarkers. Instead, specific cut-off ranges should be proposed and intervention studies conducted to prove outcome-relevant impacts on patient care. The focus of this review is to exemplify the appropriate use of selected laboratory tests in the emergency setting for which randomised controlled intervention studies have proven clinical benefit. Herein, we focus on initial patient triage and the allocation of treatment opportunities in patients with cardiorespiratory diseases in the emergency department.
The following six biomarkers will be discussed: proadrenomedullin for prognostic triage assessment and site-of-care decisions, cardiac troponin for acute myocardial infarction, natriuretic peptides for acute heart failure, D-dimers for venous thromboembolism, C-reactive protein as a marker of inflammation, and procalcitonin for antibiotic stewardship in respiratory tract infections and sepsis. For these markers we provide an overview of their pathophysiology, the historical evolution of the evidence, and their strengths and limitations for rational implementation into clinical algorithms. We critically discuss results from key intervention trials that led to their use in clinical routine, as well as potential future indications. The rationale for the use of all these biomarkers is to tackle, first, diagnostic ambiguity and the consequent defensive medicine; second, delayed and sub-optimal therapeutic decisions; and third, prognostic uncertainty with misguided triage and site-of-care decisions, all of which contribute to the waste of our limited health care resources. A multifaceted approach to a more targeted management of medical patients from emergency admission to discharge, including biomarkers, will translate into better resource use, shorter length of hospital stay, reduced overall costs, and improved patient satisfaction and outcomes in terms of mortality and re-hospitalisation. Hopefully, the concepts outlined in this review will help readers to improve their diagnostic skills and become more parsimonious requesters of laboratory tests.
Abstract:
This study evaluated the correlation between three strip-type, colorimetric tests and two laboratory methods with respect to the analysis of salivary buffering. The strip-type tests were Saliva-Check Buffer, Dentobuff Strip and the CRT® Buffer test. The laboratory methods comprised Ericsson's laboratory method and a monotone acid/base titration used to create a reference scale for salivary titratable acidity. Additionally, defined buffer solutions were prepared and tested to simulate the carbonate, phosphate and protein buffer systems of saliva. The correlation between the methods was analysed by Spearman's rank correlation. Disagreement was detected between the buffering capacity values obtained with the three strip-type tests, and it was more pronounced for saliva samples with medium and low buffering capacities. All strip-type tests were able to assign the hydrogencarbonate, di-hydrogenphosphate and 0.1% protein buffer solutions to the correct buffer categories. However, at 0.6% total protein concentration, none of the test systems worked accurately. Improvements to strip-type tests are necessary because of their disagreement with Ericsson's laboratory method and their dependence on the protein content of saliva.
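Spearman's rank correlation, used above to compare the methods, is simply the Pearson correlation computed on rank-transformed values, which makes it suitable for the ordinal buffer categories involved. A self-contained Python sketch (the helper names are mine; tied values share their average rank):

```python
def rank(xs):
    """Average ranks, 1-based; tied values share the mean rank of their block."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j to cover the whole block of equal values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks of x and y."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Any strictly increasing relationship between two measurement scales yields rho = 1, which is why the coefficient is insensitive to the differing units of the strip tests and titration methods.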
Abstract:
New oral anticoagulants promise to overcome essential drawbacks of traditional substances. They have a predictable therapeutic effect, a wide therapeutic window and only limited interactions with food and drugs, and they can be administered orally at a fixed dose. On the other hand, knowledge of the laboratory management of the new anticoagulants is limited. In the present article we discuss possible indications and available assays for the monitoring of rivaroxaban, apixaban and dabigatran. Furthermore, we discuss the interpretation of routine coagulation tests during therapy with these new drugs.
Abstract:
Gastroesophageal reflux disease (GERD) remains the most common GI-related condition in the out-patient setting. While primary care physicians often use empiric trials with proton pump inhibitors (PPI trial) to diagnose GERD, specialised tests are often required to confirm or exclude gastroesophageal reflux as the cause of esophageal or extraesophageal symptoms. The most commonly used procedures to diagnose GERD include conventional (catheter-based) pH monitoring, wireless esophageal pH monitoring (Bravo), bilirubin monitoring (Bilitec), and combined multichannel intraluminal impedance-pH monitoring (MII-pH). Each technique has strengths and limitations of which clinicians and investigators should be aware when deciding which one to choose.
Abstract:
The first part of this three-part review on the relevance of laboratory testing of composites and adhesives deals with approval requirements for composite materials. We compare the in vivo and in vitro literature data and discuss the relevance of in vitro analyses. The standardized ISO protocols are presented, with a focus on the evaluation of physical parameters. These tests all have a standardized protocol that describes the entire test set-up. The tests analyse flexural strength, depth of cure, susceptibility to ambient light, color stability, water sorption and solubility, and radiopacity. Some tests have a clinical correlation. A high flexural strength, for instance, decreases the risk of fractures of the marginal ridge in posterior restorations and incisal edge build-ups of restored anterior teeth. Other tests do not have a clinical correlation or the threshold values are too low, which results in an approval of materials that show inferior clinical properties (e.g., radiopacity). It is advantageous to know the test set-ups and the ideal threshold values to correctly interpret the material data. Overall, however, laboratory assessment alone cannot ensure the clinical success of a product.
Abstract:
INTRODUCTION: Rivaroxaban (RXA) is licensed for prophylaxis of venous thromboembolism after major orthopaedic surgery of the lower limbs. Currently, no test to quantify RXA in plasma has been validated in an inter-laboratory setting. Our study had three aims: to assess i) the feasibility of RXA quantification with a commercial anti-FXa assay, ii) its accuracy and precision in an inter-laboratory setting, and iii) the influence of 10 mg of RXA on routine coagulation tests. METHODS: The same chromogenic anti-FXa assay (Hyphen BioMed) was used in all participating laboratories. RXA calibrators and sets of blinded probes (aim ii) were prepared in vitro by spiking normal plasma. The precise RXA content was assessed by high-pressure liquid chromatography-tandem mass spectrometry. For ex vivo studies (aim iii), plasma samples from 20 healthy volunteers taken before and 2-3 hours after ingestion of 10 mg of RXA were analyzed by the participating laboratories. RESULTS: RXA can be assayed chromogenically. Among the participating laboratories, the mean accuracy and the mean coefficient of variation for precision of RXA quantification were 7.0% and 8.8%, respectively. The mean RXA concentration was 114 ± 43 µg/L. RXA significantly altered prothrombin time, activated partial thromboplastin time, and assays of intrinsic and extrinsic factors. Determinations of thrombin time, fibrinogen, FXIII and D-dimer levels were not affected. CONCLUSIONS: RXA plasma levels can be quantified accurately and precisely with a chromogenic anti-FXa assay on different coagulometers in different laboratories. Ingestion of 10 mg of RXA results in significant alterations of both PT- and aPTT-based coagulation assays.
Abstract:
In animal experiments, animals, husbandry and test procedures are traditionally standardized to maximize test sensitivity and minimize animal use, assuming that this will also guarantee reproducibility. However, by reducing within-experiment variation, standardization may limit inference to the specific experimental conditions. Indeed, we have recently shown in mice that standardization may generate spurious results in behavioral tests, accounting for poor reproducibility, and that this can be avoided by population heterogenization through systematic variation of experimental conditions. Here, we examined whether a simple form of heterogenization effectively improves reproducibility of test results in a multi-laboratory situation. Each of six laboratories independently ordered 64 female mice of two inbred strains (C57BL/6NCrl, DBA/2NCrl) and examined them for strain differences in five commonly used behavioral tests under two different experimental designs. In the standardized design, experimental conditions were standardized as much as possible in each laboratory, while they were systematically varied with respect to the animals' test age and cage enrichment in the heterogenized design. Although heterogenization tended to improve reproducibility by increasing within-experiment variation relative to between-experiment variation, the effect was too weak to account for the large variation between laboratories. However, our findings confirm the potential of systematic heterogenization for improving reproducibility of animal experiments and highlight the need for effective and practicable heterogenization strategies.
Abstract:
Avidity tests can be used to discriminate between cattle that are acutely and chronically infected with the intracellular parasite Neospora caninum. The aim of this study was to compare the IgG avidity ELISA tests used in four European laboratories. A coded panel of 200 bovine sera from well-documented, naturally and experimentally N. caninum-infected animals was analysed at the participating laboratories with their respective assay systems and laboratory protocols. Comparing the numeric test results, the concordance correlation coefficients were between 0.479 and 0.776. The laboratories categorize the avidity results into the classes "low" and "high", which are considered indicative of recent and chronic infection, respectively. Three laboratories also use an "intermediate" class. When the categorized data were analysed by Kappa statistics, there was moderate to substantial agreement between the laboratories. There was better overall agreement for dichotomized results than when an intermediate class was also used. Taken together, this first ring test for N. caninum IgG avidity assays showed moderate agreement between the assays used by the different laboratories to estimate IgG avidity. Our experience suggests that avidity tests are sometimes less robust than conventional ELISAs. It is therefore essential that they are carefully standardised and that their performance is continuously evaluated.
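The Kappa statistic used for the categorized "low"/"intermediate"/"high" results corrects the observed inter-laboratory agreement for the agreement expected by chance alone. A minimal sketch of the unweighted version in Python (the function name and example labels are mine, not from the study):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Unweighted Cohen's kappa for two raters judging the same samples.

    kappa = (p_observed - p_expected) / (1 - p_expected),
    where p_expected comes from the raters' marginal category frequencies.
    """
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)
```

On the usual verbal scale, values of 0.41-0.60 are read as moderate and 0.61-0.80 as substantial agreement, matching the wording of the abstract.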
Abstract:
There is no accepted way of measuring prothrombin time without loss of time for patients undergoing major surgery who are at risk of intraoperative dilution and consumption coagulopathy due to bleeding and volume replacement with crystalloids or colloids. In these situations, decisions to transfuse fresh frozen plasma and procoagulatory drugs have to rely on clinical judgment. Point-of-care (PoC) devices are considerably faster than the standard laboratory methods. In this study we assessed the accuracy of a PoC device measuring prothrombin time compared with the standard laboratory method. Patients undergoing major surgery and intensive care unit patients were included. PoC prothrombin time was measured with the CoaguChek XS Plus (Roche Diagnostics, Switzerland). PoC and reference tests were performed independently and interpreted under blinded conditions. Using a cut-off prothrombin time of 50%, we calculated diagnostic accuracy measures, plotted a receiver operating characteristic (ROC) curve and tested for equivalence between the two methods. PoC sensitivity and specificity were 95% (95% CI 77%, 100%) and 95% (95% CI 91%, 98%), respectively. The negative likelihood ratio was 0.05 (95% CI 0.01, 0.32). The positive likelihood ratio was 19.57 (95% CI 10.62, 36.06). The area under the ROC curve was 0.988. Equivalence between the two methods was confirmed. The CoaguChek XS Plus is a rapid and highly accurate test compared with the reference test. These findings suggest that PoC testing will be useful for monitoring intraoperative prothrombin time when coagulopathy is suspected. It could lead to a more rational use of expensive and limited blood bank resources.
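The likelihood ratios reported above follow directly from sensitivity and specificity: LR+ = Se/(1 - Sp) and LR- = (1 - Se)/Sp. A small Python sketch with hypothetical counts (the abstract does not give the 2x2 table; the counts here are chosen only so that both Se and Sp come out at the rounded 95%, which yields LR+ = 19 rather than the exact 19.57 computed from the study's true counts):

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # factor by which a positive result raises the odds
    lr_neg = (1 - sens) / spec   # factor by which a negative result lowers the odds
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts: 20 coagulopathic and 200 non-coagulopathic samples.
sens, spec, lr_pos, lr_neg = likelihood_ratios(tp=19, fp=10, fn=1, tn=190)
```

An LR- of 0.05 means a negative PoC result reduces the odds of coagulopathy twenty-fold, which is what makes the device useful as a rapid rule-out test.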
Abstract:
Recently, screening tests for monitoring the prevalence of transmissible spongiform encephalopathies specifically in sheep and goats became available. Although most countries require comprehensive test validation prior to approval, little is known about their performance under normal operating conditions. Switzerland was one of the first countries to implement 2 of these tests, an enzyme-linked immunosorbent assay (ELISA) and a Western blot, in a 1-year active surveillance program. Slaughtered animals (n = 32,777) were analyzed with either of the 2 tests, with immunohistochemistry for confirmation of initial reactive results, and fallen stock samples (n = 3,193) were subjected to both screening tests and immunohistochemistry in parallel. Initial reactive and false-positive rates were recorded over time. Both tests revealed an excellent diagnostic specificity (>99.5%). However, initial reactive rates were elevated at the beginning of the program but dropped to levels below 1% with routine and enhanced staff training. Only the rates for the ELISA increased again in the second half of the program, and these correlated with the degree of tissue autolysis in the fallen stock samples. It is noteworthy that the Western blot missed 1 of the 3 atypical scrapie cases in the fallen stock, indicating potential differences in the diagnostic sensitivities of the 2 screening tests. However, an estimation of the diagnostic sensitivity of both tests on field samples remained difficult due to the low disease prevalence. Taken together, these results highlight the importance of staff training, sample quality, and interlaboratory comparison trials when such screening tests are implemented in the field.
Abstract:
Bovine besnoitiosis is considered an emerging chronic and debilitating disease in Europe. Many infections remain subclinical, and the only sign of disease is the presence of parasitic cysts in the sclera and conjunctiva. Serological tests are useful for detecting asymptomatic cattle/sub-clinical infections for control purposes, as there are no effective drugs or vaccines. For this purpose, diagnostic tools need to be further standardized. Thus, the aim of this study was to compare the serological tests available in Europe in a multi-centred study. A coded panel of 241 well-characterized sera from infected and non-infected bovines was provided by all participants (SALUVET-Madrid, FLI-Wusterhausen, ENV-Toulouse, IPB-Berne). The tests evaluated were as follows: an in-house ELISA, three commercial ELISAs (INGEZIM BES 12.BES.K1 INGENASA, PrioCHECK Besnoitia Ab V2.0, ID Screen Besnoitia indirect IDVET), two IFATs and seven Western blot tests (tachyzoite and bradyzoite extracts under reducing and non-reducing conditions). Two different definitions of a gold standard were used: (i) the result of the majority of tests ('Majority of tests') and (ii) the majority of test results plus pre-test information based on clinical signs ('Majority of tests plus pre-test info'). Relative to the gold standard 'Majority of tests', almost 100% sensitivity (Se) and specificity (Sp) were obtained with SALUVET-Madrid and FLI-Wusterhausen tachyzoite- and bradyzoite-based Western blot tests under non-reducing conditions. On the ELISAs, PrioCHECK Besnoitia Ab V2.0 showed 100% Se and 98.8% Sp, whereas ID Screen Besnoitia indirect IDVET showed 97.2% Se and 100% Sp. The in-house ELISA and INGEZIM BES 12.BES.K1 INGENASA showed 97.3% and 97.2% Se; and 94.6% and 93.0% Sp, respectively. IFAT FLI-Wusterhausen performed better than IFAT SALUVET-Madrid, with 100% Se and 95.4% Sp. 
Relative to the gold standard 'Majority of tests plus pre-test info', Sp significantly decreased; this result was expected because of the existence of seronegative animals with clinical signs. All ELISAs performed very well and could be used in epidemiological studies; however, the Western blot tests performed better and could be employed a posteriori for control purposes in the case of uncertain results from valuable samples.
Children's performance estimation in mathematics and science tests over a school year: A pilot study
Abstract:
The metacognitive ability to accurately estimate one's performance in a test is assumed to be of central importance for initializing task-oriented effort, activating adequate problem-solving strategies, and engaging in efficient error detection and correction. Although school children's ability to estimate their own performance has been widely investigated, this has mostly been done in highly controlled experimental set-ups involving only a single test occasion. Method: The aim of this study was to investigate this metacognitive ability in the context of real achievement tests in mathematics. Developed and applied by the teacher of a 5th-grade class over the course of a school year, these tests allowed the exploration of the variability of performance estimation accuracy as a function of test difficulty. Results: Mean performance estimations were generally close to actual performance, with somewhat less variability compared with test performance. When grouping the children into three achievement levels, the results revealed higher accuracy of performance estimations in the high achievers than in the low and average achievers. To explore the generalisability of these findings, the analyses were also conducted for the same children's tests in their science classes, revealing a pattern of results very similar to that in the domain of mathematics. Discussion and Conclusion: By and large, the present study, conducted in a natural environment, confirmed previous laboratory findings but also offered additional insights into the generalisation and test dependency of students' performance estimations.
Abstract:
BACKGROUND While the assessment of analytical precision within medical laboratories has received much attention in scientific enquiry, the degree of, as well as the sources causing, variation between laboratories remains incompletely understood. In this study, we quantified the variance components when performing coagulation tests with identical analytical platforms in different laboratories and computed intraclass correlation coefficients (ICC) for each coagulation test. METHODS Data from eight laboratories measuring fibrinogen twice in twenty healthy subjects with one of 3 different platforms, and single measurements of prothrombin time (PT) and coagulation factors II, V, VII, VIII, IX, X, XI and XIII, were analysed. By platform, the variance components of (i) the subjects, (ii) the laboratory and the technician, and (iii) the total variance were obtained for fibrinogen, as well as (i) and (iii) for the remaining factors, using ANOVA. RESULTS The variability of fibrinogen measurements within a laboratory ranged from 0.02 to 0.04; the variability between laboratories ranged from 0.006 to 0.097. The ICC for fibrinogen ranged from 0.37 to 0.66, and from 0.19 to 0.80 for PT, between the platforms. For the remaining factors, the ICCs ranged from 0.04 (FII) to 0.93 (FVIII). CONCLUSIONS Variance components that could be attributed to technicians or laboratory procedures were substantial, led to disappointingly low intraclass correlation coefficients for several factors, and were pronounced for some of the platforms. Our findings call for sustained efforts to raise the level of standardization of structures and procedures involved in the quantification of coagulation factors.
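The ICC reported here expresses the share of total variance attributable to true differences between subjects rather than to laboratories and technicians; for duplicate measurements it can be derived from a one-way ANOVA as (MSB - MSW) / (MSB + (k - 1) * MSW), with MSB and MSW the between- and within-subject mean squares and k the number of repeats. A minimal Python sketch (the function name is mine; it assumes a balanced design with the same number of repeats per subject, as in the duplicate fibrinogen measurements):

```python
def icc_oneway(measurements):
    """ICC(1) from a balanced one-way design.

    `measurements` is a list of per-subject lists, each of equal length k
    (e.g. the k repeated fibrinogen values for one subject).
    """
    n = len(measurements)
    k = len(measurements[0])
    grand = sum(sum(m) for m in measurements) / (n * k)
    subj_means = [sum(m) / k for m in measurements]
    # between-subject and within-subject sums of squares
    ss_between = k * sum((sm - grand) ** 2 for sm in subj_means)
    ss_within = sum((x - sm) ** 2
                    for m, sm in zip(measurements, subj_means) for x in m)
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

An ICC near 1 means repeated measurements of the same subject agree closely relative to the spread between subjects; values like the 0.04 reported for FII indicate that laboratory and technician effects dominate the biological signal.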