907 results for Serologic tests
Abstract:
To quickly localize defects, we want our attention to be focused on relevant failing tests. We propose to improve defect localization by exploiting dependencies between tests, using a JUnit extension called JExample. In a case study, a monolithic white-box test suite for a complex algorithm is refactored into two traditional JUnit-style test suites and one JExample suite. Of the three refactorings, JExample reports five times fewer defect locations and slightly better performance (−8 to −12%), while having similar maintenance characteristics. Compared to the original implementation, JExample greatly improves maintainability due to the improved factorization following accepted test quality guidelines. As such, JExample combines the benefits of test chains with the test quality aspects of JUnit-style testing.
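To illustrate the test-chain style that JExample adds on top of JUnit, the sketch below shows how one test can declare that it depends on another and receive that test's fixture as a parameter. The Stack example, the package names, and the @Given/@RunWith identifiers follow publicly available JExample examples from memory and may differ between library versions, so they should be read as assumptions rather than as the code used in the case study.

    import static org.junit.Assert.*;
    import java.util.Stack;
    import org.junit.Test;
    import org.junit.runner.RunWith;
    import ch.unibe.jexample.Given;       // assumed package; check the JExample release you use
    import ch.unibe.jexample.JExample;    // assumed runner class name

    @RunWith(JExample.class)
    public class StackTest {

        // Producer test: returns its fixture so dependent tests can reuse it.
        @Test
        public Stack<String> emptyStack() {
            Stack<String> stack = new Stack<String>();
            assertTrue(stack.isEmpty());
            return stack;
        }

        // Depends on emptyStack(); receives the returned stack as a parameter.
        @Test
        @Given("emptyStack")
        public Stack<String> pushOne(Stack<String> stack) {
            stack.push("element");
            assertEquals("element", stack.peek());
            return stack;
        }

        // Depends on pushOne(); runs only if the whole chain above passed.
        @Test
        @Given("pushOne")
        public void popOne(Stack<String> stack) {
            assertEquals("element", stack.pop());
            assertTrue(stack.isEmpty());
        }
    }

Because a dependent test is skipped rather than executed when its producer fails, a single defect surfaces as one failing test instead of a cascade of failures, which is how a test chain can narrow down the reported defect locations.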
Abstract:
Recurrent airway obstruction (RAO) is a common condition in stabled horses, characterised by small airway inflammation, airway neutrophilia and obstruction following exposure of susceptible horses to mouldy hay and straw; it is thus regarded as a hypersensitivity reaction to mould spores. However, the role of IgE-mediated reactions in RAO remains unclear. The aim of the study was to investigate, with a serological IgE ELISA test (Allercept), an in vitro sulfidoleukotriene (sLT) release assay (CAST) and intradermal testing (IDT), whether serum IgE and IgE-mediated reactions against various mould, mite and pollen extracts are associated with RAO. IDT reactions were evaluated at different times in order to detect IgE-mediated immediate type reactions (type I hypersensitivity reactions, 0.5-1 h), immune complex-mediated late type reactions (type III reactions, 4-10 h) and cell-mediated delayed type reactions (type IV hypersensitivity reactions, 24-48 h). In the serological test, the control horses overall displayed more positive reactions than the RAO-affected horses, but the difference was not significant. Comparison of the measured IgE levels showed that the RAO-affected horses had slightly higher IgE levels against Aspergillus fumigatus than controls (35 and 16 AU, respectively, p<0.05), but all values were below the cut-off (150 AU) of the test. In the sLT release assay, seven positive reactions were observed in the RAO-affected horses and four in the controls, but this difference was not significant. A significantly higher proportion of late type IDT reactions was observed in RAO-affected horses compared to controls (25 of 238 possible reactions versus 12 of 238 possible reactions, respectively, p<0.05). Interestingly, four RAO-affected but none of the control horses reacted with the recombinant mould allergen A. fumigatus 8 (rAsp f 8, p<0.05), but only late phase and delayed type reactions were observed. In all three tests, the majority of the positive reactions were observed with the mite extracts (64%, 74% and 88% of all positive reactions, respectively), but none of the tests showed a significant difference between RAO-affected and control animals. Our findings do not support an important role for IgE-mediated reactions in the pathogenesis of RAO. Further studies are needed to investigate whether sensitisation to mite allergens is of clinical relevance in the horse and to understand the role of immune reactions against rAsp f 8.
Abstract:
Screening tests for monitoring the prevalence of transmissible spongiform encephalopathies specifically in sheep and goats have recently become available. Although most countries require comprehensive test validation prior to approval, little is known about their performance under normal operating conditions. Switzerland was one of the first countries to implement 2 of these tests, an enzyme-linked immunosorbent assay (ELISA) and a Western blot, in a 1-year active surveillance program. Slaughtered animals (n = 32,777) were analyzed with either of the 2 tests, with immunohistochemistry for confirmation of initial reactive results, and fallen stock samples (n = 3,193) were subjected to both screening tests and immunohistochemistry in parallel. Initial reactive and false-positive rates were recorded over time. Both tests revealed an excellent diagnostic specificity (>99.5%). However, initial reactive rates were elevated at the beginning of the program but dropped to levels below 1% with routine and enhanced staff training. Only the rates in the ELISA increased again in the second half of the program, and these correlated with the degree of tissue autolysis in the fallen stock samples. It is noteworthy that the Western blot missed 1 of the 3 atypical scrapie cases in the fallen stock, indicating potential differences in the diagnostic sensitivities between the 2 screening tests. However, an estimation of the diagnostic sensitivity of both tests on field samples remained difficult due to the low disease prevalence. Taken together, these results highlight the importance of staff training, sample quality, and interlaboratory comparison trials when such screening tests are implemented in the field.
Abstract:
A retrospective study of 2,146 feedlot cattle in 17 feedlot tests from 1988 to 1997 was conducted to determine the impact of bovine respiratory disease (BRD) on veterinary treatment costs, average daily gain, carcass traits, mortality, and net profit. Morbidity caused by BRD was 20.6%. The average cost to treat each case of BRD was $12.39. The mortality rate of calves diagnosed and treated for BRD was 5.9% vs. 0.35% for those not diagnosed with BRD. Average daily gain differed between treated and non-treated steers during the first 28 days on feed but did not differ from day 28 to harvest. Net profit was $57.48 lower for treated steers. Eighty-two percent of this difference was due to a combination of mortality and treatment costs. Eighteen percent of the net profit difference was due to improved performance and carcass value of the non-treated steers. Data from 496 steers and heifers in nine feedlot tests were used to determine whether age, weaning, and the use of modified live virus or killed vaccines prior to the test predicted BRD. Younger calves, non-weaned calves, and calves vaccinated with killed vaccines prior to the test had higher BRD morbidity than those that were older, weaned, or vaccinated with modified live virus vaccines, respectively. Treatment regimens that precluded relapses requiring re-treatment prevented the reduced performance and loss of carcass value. Using modified live virus vaccines and weaning calves 30 days prior to shipment reduced the incidence of BRD.
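To put the reported split in absolute terms, the percentages above translate into roughly the following amounts per treated steer (illustrative arithmetic derived from the reported figures, not stated in the abstract itself):

\[
0.82 \times \$57.48 \approx \$47.13 \quad (\text{mortality and treatment costs}), \qquad
0.18 \times \$57.48 \approx \$10.35 \quad (\text{performance and carcass value of non-treated steers}).
\]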
Abstract:
Following field observations of wild Agassiz's desert tortoises (Gopherus agassizii) with oral lesions similar to those seen in captive tortoises with herpesvirus infection, we measured the prevalence of antibodies to Testudinid herpesvirus (TeHV) 3 in wild populations of desert tortoises in California. The survey revealed 30.9% antibody prevalence. In 2009 and 2010, two wild adult male desert tortoises, with gross lesions consistent with trauma and puncture wounds, respectively, were necropsied. Tortoise 1 was from the central Mojave Desert and tortoise 2 was from the northeastern Mojave Desert. We extracted DNA from the tongue of tortoise 1 and from the tongue and nasal mucosa of tortoise 2. Sequencing of polymerase chain reaction products of the herpesviral DNA-dependent DNA polymerase gene and the UL39 gene, respectively, showed 100% nucleotide identity with TeHV2, which was previously detected in an ill captive desert tortoise in California. Although several cases of herpesvirus infection have been described in captive desert tortoises, our findings represent the first conclusive molecular evidence of TeHV2 infection in wild desert tortoises. The serologic findings support cross-reactivity between TeHV2 and TeHV3. Further studies to determine the ecology, prevalence, and clinical significance of this virus in tortoise populations are needed.
Abstract:
BACKGROUND Infectious diseases after solid organ transplantation (SOT) are one of the major complications in transplantation medicine. Vaccination-based prevention is desirable, but data on the response to active vaccination after SOT are conflicting. METHODS In this systematic review, we identify the serologic response rate of SOT recipients to post-transplantation vaccination against tetanus, diphtheria, polio, hepatitis A and B, influenza, Streptococcus pneumoniae, Haemophilus influenzae, Neisseria meningitidis, tick-borne encephalitis, rabies, varicella, mumps, measles, and rubella. RESULTS Of the 2478 papers initially identified, 72 were included in the final review. The most important findings are that (1) most clinical trials conducted and published over more than 30 years have been small and highly heterogeneous regarding trial design, patient cohorts selected, patient inclusion criteria, dosing and vaccination schemes, follow-up periods and outcomes assessed, (2) the individual vaccines investigated have been studied predominantly in only one group of SOT recipients, i.e. tetanus, diphtheria and polio in RTX recipients, hepatitis A exclusively in adult LTX recipients and mumps, measles and rubella in paediatric LTX recipients, (3) SOT recipients mount an immune response that is, for most vaccines, lower than in healthy controls; the degree to which this response is impaired varies with the type of vaccine, age and organ transplanted, and (4) for some vaccines antibodies decline rapidly. CONCLUSION Vaccine-based prevention of infectious diseases is far from satisfactory in SOT recipients. Despite the large number of vaccination studies performed over the past decades, knowledge on vaccination response is still limited. Even though the protection that can be achieved in SOT recipients through vaccination appears encouraging on the basis of available data, current vaccination guidelines and recommendations for SOT recipients remain poorly supported by evidence. There is an urgent need to conduct appropriately powered vaccination trials in well-defined SOT recipient cohorts.
Abstract:
BACKGROUND AND OBJECTIVES Quantitative sensory testing (QST) is widely used to investigate peripheral and central sensitization. However, the comparative performance of different QST for diagnostic or prognostic purposes is unclear. We explored the discriminative ability of different quantitative sensory tests in distinguishing between patients with chronic neck pain and pain-free control subjects and ranked these tests according to the extent of their association with pain hypersensitivity. METHODS We performed a case-control study in 40 patients and 300 control subjects. Twenty-six tests, including different modalities of pressure, heat, cold, and electrical stimulation, were used. As measures of discrimination, we estimated receiver operating characteristic curves and likelihood ratios. RESULTS The following quantitative sensory tests displayed the best discriminative value: (1) pressure pain threshold at the site of the most severe neck pain (fitted area under the receiver operating characteristic curve, 0.92), (2) reflex threshold to single electrical stimulation (0.90), (3) pain threshold to single electrical stimulation (0.89), (4) pain threshold to repeated electrical stimulation (0.87), and (5) pressure pain tolerance threshold at the site of the most severe neck pain (0.86). Only the first 3 could be used for both ruling in and out pain hypersensitivity. CONCLUSIONS Pressure stimulation at the site of the most severe pain and parameters of electrical stimulation were the most appropriate QST to distinguish between patients with chronic neck pain and asymptomatic control subjects. These findings may be used to select the tests in future diagnostic and longitudinal prognostic studies on patients with neck pain and to optimize the assessment of localized and spreading sensitization in chronic pain patients.
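For reference, the discrimination measures mentioned above follow the standard definitions (general formulas, not specific to this study): the area under the receiver operating characteristic curve is the probability that a randomly chosen patient gives a more abnormal test result than a randomly chosen pain-free control, and, at a given cut-off, the likelihood ratios used for ruling in and ruling out hypersensitivity are

\[
LR^{+} = \frac{\text{sensitivity}}{1-\text{specificity}}, \qquad
LR^{-} = \frac{1-\text{sensitivity}}{\text{specificity}},
\]

so a test is useful for both ruling in and ruling out only when its LR+ is well above 1 and its LR− is well below 1.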
Abstract:
In this note, we show that an extension of a test for perfect ranking in a balanced ranked set sample given by Li and Balakrishnan (2008) to the multi-cycle case turns out to be equivalent to the test statistic proposed by Frey et al. (2007). This provides an alternative interpretation and motivation for their test statistic.
Abstract:
Discussing new or recently reformed citizenship tests in the USA, Australia, and Canada, this article asks whether they amount to a restrictive turn in new world citizenship, similar to recent developments in Europe. I argue that elements of a restrictive turn are noticeable in Australia and Canada, but only at the level of political rhetoric, not of law and policy, which remain liberal and inclusive. Much as in Europe, the restrictive turn is tantamount to Muslims and Islam moving to the center of the integration debate.
Abstract:
In the past 2 decades, we have observed a rapid increase of infections due to multidrug-resistant Enterobacteriaceae. Regrettably, these isolates possess genes encoding extended-spectrum β-lactamases (e.g., blaCTX-M, blaTEM, blaSHV) or plasmid-mediated AmpCs (e.g., blaCMY) that confer resistance to last-generation cephalosporins. Furthermore, other resistance traits against quinolones (e.g., mutations in gyrA and parC, qnr elements) and aminoglycosides (e.g., aminoglycoside-modifying enzymes and 16S rRNA methylases) are also frequently co-associated. Even more concerning is the rapid increase of Enterobacteriaceae carrying genes conferring resistance to carbapenems (e.g., blaKPC, blaNDM). Therefore, the spread of these pathogens puts our antibiotic options in peril. Unfortunately, standard microbiological procedures require several days to isolate the responsible pathogen and to provide correct antimicrobial susceptibility test results. This delay hampers the rapid implementation of adequate antimicrobial treatment and infection control countermeasures. Thus, there is emerging interest in the early and more sensitive detection of resistance mechanisms. Modern non-phenotypic tests are promising in this respect and can hence influence both clinical outcome and healthcare costs. In this review, we present a summary of the most advanced methods (e.g., next-generation DNA sequencing, multiplex PCRs, real-time PCRs, microarrays, MALDI-TOF MS, and PCR/ESI MS) presently available for the rapid detection of antibiotic resistance genes in Enterobacteriaceae. Taking into account speed, manageability, accuracy, versatility, and costs, the possible settings of application (research, clinic, and epidemiology) of these methods and their superiority over standard phenotypic methods are discussed.
Abstract:
OBJECTIVE To determine the diagnostic value of a serologic microagglutination test (MAT) and a PCR assay on urine and blood for the diagnosis of leptospirosis in dogs with acute kidney injury (AKI). DESIGN Cross-sectional study. ANIMALS 76 dogs with AKI in a referral hospital (2008 to 2009). PROCEDURES Dogs' leptospirosis status was defined with a paired serologic MAT against a panel of 11 Leptospira serovars as leptospirosis-associated (n = 30) or nonleptospirosis-associated AKI (12). In 34 dogs, convalescent serologic testing was not possible, and leptospirosis status was classified as undetermined. The diagnostic value of the MAT on a single acute or convalescent blood sample was determined in dogs in which leptospirosis status could be classified. The diagnostic value of a commercially available genus-specific PCR assay was evaluated by use of 36 blood samples and 20 urine samples. RESULTS Serologic testing of an acute blood sample had a specificity of 100% (95% CI, 76% to 100%), a sensitivity of 50% (33% to 67%), and an accuracy of 64% (49% to 77%). Serologic testing of a convalescent blood sample had a specificity of 92% (65% to 99%), a sensitivity of 100% (87% to 100%), and an accuracy of 98% (88% to 100%). Results of the Leptospira PCR assay were negative for all samples from dogs for which leptospirosis status could be classified. CONCLUSIONS AND CLINICAL RELEVANCE Serologic MAT results were highly accurate for the diagnosis of leptospirosis in dogs, despite a low sensitivity for early diagnosis. In this referral setting of dogs pretreated with antimicrobials, testing of blood and urine samples with a commercially available genus-specific PCR assay did not improve early diagnosis.
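For reference, the accuracy measures reported above follow the usual definitions from a 2 x 2 table of test results against the final leptospirosis classification (standard formulas, not specific to this study):

\[
\text{sensitivity} = \frac{TP}{TP+FN}, \qquad
\text{specificity} = \frac{TN}{TN+FP}, \qquad
\text{accuracy} = \frac{TP+TN}{TP+TN+FP+FN}.
\]

With 30 leptospirosis-associated and 12 nonleptospirosis-associated dogs, the reported 50% sensitivity of the single acute MAT corresponds to roughly 15 of the 30 leptospirosis-associated dogs testing positive on that sample.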
Children's performance estimation in mathematics and science tests over a school year: A pilot study
Abstract:
The metacognitive ability to accurately estimate one's performance in a test is assumed to be of central importance for initializing task-oriented effort, activating adequate problem-solving strategies, and engaging in efficient error detection and correction. Although schoolchildren's ability to estimate their own performance has been widely investigated, this was mostly done under highly controlled experimental set-ups including only a single test occasion. Method: The aim of this study was to investigate this metacognitive ability in the context of real achievement tests in mathematics. Developed and applied by the teacher of a 5th-grade class over the course of a school year, these tests allowed the exploration of the variability of performance estimation accuracy as a function of test difficulty. Results: Mean performance estimations were generally close to actual performance, with somewhat less variability than test performance. When the children were grouped into three achievement levels, results revealed higher accuracy of performance estimations in the high achievers compared to the low and average achievers. To explore the generalization of these findings, analyses were also conducted for the same children's tests in their science classes, revealing a very similar pattern of results to the domain of mathematics. Discussion and Conclusion: By and large, the present study, conducted in a natural environment, confirmed previous laboratory findings but also offered additional insights into the generalisation and test dependency of students' performance estimations.
Abstract:
BACKGROUND Although free eye testing is available in the UK from a nationwide network of optometrists, there is evidence of unrecognised, tractable vision loss amongst older people. A recent review identified this unmet need as a priority for further investigation, highlighting the need to understand public perceptions of eye services and barriers to service access and utilisation. This paper aims to identify risk factors for (1) having poor vision and (2) not having had an eyesight check among community-dwelling older people without an established ophthalmological diagnosis. METHODS Secondary analysis of self-reported data from the ProAge trial. A total of 1792 people without a known ophthalmological diagnosis were recruited from three group practices in London. RESULTS Almost two in ten people in this population of older individuals without known ophthalmological diagnoses had self-reported vision loss, and more than a third of them had not had an eye test in the previous twelve months. In this sample, those with limited education, depressed mood, need for help with instrumental and basic activities of daily living (IADLs and BADLs), and subjective memory complaints were at increased risk of fair or poor self-reported vision. Individuals with basic education only were at increased risk of not having had an eye test in the previous 12 months (OR 1.52, 95% CI 1.17-1.98, p=0.002), as were those with no, or only one, chronic condition (OR 1.850, 95% CI 1.382-2.477, p<0.001). CONCLUSIONS Self-reported poor vision in older people without ophthalmological diagnoses is associated with other functional losses, with no or only one chronic condition, and with depression. This pattern of disorders may be the basis for case finding in general practice. Low educational attainment is an independent determinant of not having had eye tests, as well as a factor associated with undiagnosed vision loss. There are other factors, not identified in this study, which determine uptake of eye testing in those with self-reported vision loss. Further exploration is needed to identify these factors and lead towards effective case finding.