993 results for Predictive testing
Abstract:
Species distribution models (SDM) are increasingly used to understand the factors that regulate variation in biodiversity patterns and to help plan conservation strategies. However, these models are rarely validated with independently collected data, and it is unclear whether SDM performance is maintained across distinct habitats and for species with different functional traits. Highly mobile species, such as bees, can be particularly challenging to model. Here, we use independent sets of occurrence data collected systematically in several agricultural habitats to test how the predictive performance of SDMs for wild bee species depends on species traits, habitat type, and sampling technique. We used a species distribution modeling approach parametrized for the Netherlands, with presence records from 1990 to 2010 for 193 Dutch wild bees. For each species, we built a Maxent model based on 13 climate and landscape variables. We tested the predictive performance of the SDMs with independent datasets collected from orchards and arable fields across the Netherlands from 2010 to 2013, using transect surveys or pan traps. Model predictive performance depended on species traits and habitat type. Occurrence of bee species specialized in habitat and diet was better predicted than that of generalist bees. Predictions of habitat suitability were also more precise for habitats that are temporally more stable (orchards) than for habitats that undergo regular alterations (arable fields), particularly for small, solitary bees. As a conservation tool, SDMs are better suited to modeling rarer, specialist species than more generalist ones, and will work best in long-term stable habitats. The variability of complex, short-term habitats is difficult to capture in such models, and historical land-use data generally have low thematic resolution.
To improve SDMs’ usefulness, models require explanatory variables and collection data that include detailed landscape characteristics, for example, variability of crops and flower availability. Additionally, testing SDMs with field surveys should involve multiple collection techniques.
Abstract:
1. Although population viability analysis (PVA) is widely employed, forecasts from PVA models are rarely tested. This study in a fragmented forest in southern Australia contrasted field data on patch occupancy and abundance for the arboreal marsupial greater glider Petauroides volans with predictions from a generic spatially explicit PVA model. This work represents one of the first landscape-scale tests of its type. 2. Initially we contrasted field data from a set of eucalypt forest patches totalling 437 ha with a naive null model in which forecasts of patch occupancy were made, assuming no fragmentation effects and based simply on remnant area and measured densities derived from nearby unfragmented forest. The naive null model predicted an average total of approximately 170 greater gliders, considerably greater than the true count (n = 81). 3. Congruence was examined between field data and predictions from PVA under several metapopulation modelling scenarios. The metapopulation models performed better than the naive null model. Logistic regression showed highly significant positive relationships between predicted and actual patch occupancy for the four scenarios (P = 0.001-0.006). When the model-derived probability of patch occupancy was high (0.50-0.75, 0.75-1.00), there was greater congruence between actual patch occupancy and the predicted probability of occupancy. 4. For many patches, probability distribution functions indicated that model predictions for animal abundance in a given patch were not outside those expected by chance. However, for some patches the model either substantially over-predicted or under-predicted actual abundance. Some important processes, such as inter-patch dispersal, that influence the distribution and abundance of the greater glider may not have been adequately modelled. 5. Additional landscape-scale tests of PVA models, on a wider range of species, are required to assess further predictions made using these tools. 
This will help determine those taxa for which predictions are and are not accurate and give insights for improving models for applied conservation management.
Abstract:
Objectives: (1) To establish test performance measures for Transient Evoked Otoacoustic Emission testing of 6-year-old children in a school setting; (2) To investigate whether Transient Evoked Otoacoustic Emission testing provides a more accurate and effective alternative to a pure tone screening plus tympanometry protocol. Methods: Pure tone screening, tympanometry and transient evoked otoacoustic emission data were collected from 940 subjects (1880 ears), with a mean age of 6.2 years. Subjects were tested in non-sound-treated rooms within 22 schools. Receiver operating characteristics curves along with specificity, sensitivity, accuracy and efficiency values were determined for a variety of transient evoked otoacoustic emission/pure tone screening/tympanometry comparisons. Results: The Transient Evoked Otoacoustic Emission failure rate for the group was 20.3%. The failure rate for pure tone screening was found to be 8.9%, whilst 18.6% of subjects failed a protocol consisting of combined pure tone screening and tympanometry results. In essence, findings from the comparison of overall Transient Evoked Otoacoustic Emission pass/fail with overall pure tone screening pass/fail suggested that use of a modified Rhode Island Hearing Assessment Project criterion would result in a very high probability that a child with a pass result has normal hearing (true negative). However, the hit rate was only moderate. Selection of a signal-to-noise ratio (SNR) criterion set at greater than or equal to 1 dB appeared to provide the best test performance measures for the range of SNR values investigated. Test performance measures generally declined when tympanometry results were included, with the exception of lower false alarm rates and higher positive predictive values. The exclusion of low frequency data from the Transient Evoked Otoacoustic Emission SNR versus pure tone screening analysis resulted in improved performance measures. 
Conclusions: The present study has several implications for the clinical implementation of Transient Evoked Otoacoustic Emission screening for entry-level school children. Transient Evoked Otoacoustic Emission pass/fail criteria will require revision. The findings of the current investigation offer support to the possible replacement of pure tone screening with Transient Evoked Otoacoustic Emission testing for 6-year-old children. However, they do not suggest the replacement of the pure tone screening plus tympanometry battery. (C) 2001 Elsevier Science Ireland Ltd. All rights reserved.
Abstract:
Purpose: To compare microsatellite instability (MSI) testing with immunohistochemical (IHC) detection of hMLH1 and hMSH2 in colorectal cancer. Patients and Methods: Colorectal cancers from 1,144 patients were assessed for DNA mismatch repair deficiency by two methods: MSI testing and IHC detection of hMLH1 and hMSH2 gene products. High-frequency MSI (MSI-H) was defined as more than 30% instability of at least five markers; low-level MSI (MSI-L) was defined as 1% to 29% of loci unstable. Results: Of 1,144 tumors tested, 818 showed intact expression of hMLH1 and hMSH2. Of these, 680 were microsatellite stable (MSS), 27 were MSI-H, and 111 were MSI-L. In all, 228 tumors showed absence of hMLH1 expression and 98 showed absence of hMSH2 expression: all were MSI-H. Conclusion: IHC in colorectal tumors for protein products hMLH1 and hMSH2 provides a rapid, cost-effective, sensitive (92.3%), and extremely specific (100%) method for screening for DNA mismatch repair defects. The predictive value of normal IHC for an MSS/MSI-L phenotype was 96.7%, and the predictive value of abnormal IHC was 100% for an MSI-H phenotype. Testing strategies must take into account acceptability of missing some cases of MSI-H tumors if only IHC is performed. (C) 2002 by American Society of Clinical Oncology.
Abstract:
The aim of this study was to assess the variation between neuropathologists in the diagnosis of common dementia syndromes when multiple published protocols are applied. Fourteen out of 18 Australian neuropathologists participated in diagnosing 20 cases (16 cases of dementia, 4 age-matched controls) using consensus diagnostic methods. Diagnostic criteria, clinical synopses and slides from multiple brain regions were sent to participants, who were asked for case diagnoses. Diagnostic sensitivity, specificity, predictive value, accuracy and variability were determined using percentage agreement and kappa statistics. Using CERAD criteria, there was a high inter-rater agreement for cases with probable and definite Alzheimer's disease but low agreement for cases with possible Alzheimer's disease. Braak staging and the application of criteria for dementia with Lewy bodies also resulted in high inter-rater agreement. There was poor agreement for the diagnosis of frontotemporal dementia and for identifying small vessel disease. Participants rarely diagnosed more than one disease in any case. To improve efficiency when applying multiple diagnostic criteria, several simplifications were proposed and tested on 5 of the original 20 cases. Inter-rater reliability for the diagnosis of Alzheimer's disease and dementia with Lewy bodies significantly improved. Further development of simple and accurate methods to identify small vessel lesions and diagnose frontotemporal dementia is warranted.
Abstract:
INTRODUCTION: A growing body of evidence shows the prognostic value of oxygen uptake efficiency slope (OUES), a cardiopulmonary exercise test (CPET) parameter derived from the logarithmic relationship between O2 consumption (VO2) and minute ventilation (VE), in patients with chronic heart failure (CHF). OBJECTIVE: To evaluate the prognostic value of a new CPET parameter - peak oxygen uptake efficiency (POUE) - and to compare it with OUES in patients with CHF. METHODS: We prospectively studied 206 consecutive patients with stable CHF due to dilated cardiomyopathy - 153 male, aged 53.3±13.0 years, 35.4% of ischemic etiology, left ventricular ejection fraction 27.7±8.0%, 81.1% in sinus rhythm, 97.1% receiving ACE-Is or ARBs, 78.2% beta-blockers and 60.2% spironolactone - who performed a first maximal symptom-limited treadmill CPET, using the modified Bruce protocol. In 33% of patients an implantable cardioverter-defibrillator (ICD) or cardiac resynchronization therapy device (CRT-D) was implanted during follow-up. Peak VO2, percentage of predicted peak VO2, VE/VCO2 slope, OUES and POUE were analyzed. OUES was calculated using the formula VO2 (l/min) = OUES × log10(VE) + b. POUE was calculated as peak VO2 (l/min) / log10(peak VE) (l/min). Correlation coefficients between the studied parameters were obtained. The prognostic value of each variable, adjusted for age, was evaluated through Cox proportional hazards models, and R2 percent (R2%) and the V index were used as measures of the predictive accuracy of events for each of these variables. Receiver operating characteristic (ROC) curves from logistic regression models were used to determine the cut-offs for OUES and POUE. RESULTS: Peak VO2: 20.5±5.9; percentage of predicted peak VO2: 68.6±18.2; VE/VCO2 slope: 30.6±8.3; OUES: 1.85±0.61; POUE: 0.88±0.27.
During a mean follow-up of 33.1±14.8 months, 45 (21.8%) patients died, 10 (4.9%) underwent urgent heart transplantation and in three patients (1.5%) a left ventricular assist device was implanted. All variables proved to be independent predictors of this combined event; however, VE/VCO2 slope was most strongly associated with events (HR 11.14). In this population, POUE was associated with a higher risk of events than OUES (HR 9.61 vs. 7.01), and was also a better predictor of events (R2: 28.91 vs. 22.37). CONCLUSION: POUE was more strongly associated with death, urgent heart transplantation and implantation of a left ventricular assist device and proved to be a better predictor of events than OUES. These results suggest that this new parameter can increase the prognostic value of CPET in patients with CHF.
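The two efficiency parameters described above can be sketched numerically. In the illustration below (variable names and data are illustrative, not from the study), OUES is fitted as the slope of VO2 against log10(VE) over a simulated exercise test, and POUE is computed from the peak values only:

```python
import numpy as np

def oues(vo2_l_min, ve_l_min):
    """OUES: slope of the fit VO2 = OUES * log10(VE) + b."""
    slope, _intercept = np.polyfit(np.log10(ve_l_min), vo2_l_min, 1)
    return slope

def poue(peak_vo2_l_min, peak_ve_l_min):
    """POUE: peak VO2 (l/min) divided by log10 of peak VE (l/min)."""
    return peak_vo2_l_min / np.log10(peak_ve_l_min)

# Illustrative data: VO2 rising linearly with log10(VE), as the OUES model assumes.
ve = np.linspace(10.0, 80.0, 50)   # minute ventilation, l/min
vo2 = 1.8 * np.log10(ve) + 0.1     # O2 consumption, l/min (true OUES = 1.8)

print(round(oues(vo2, ve), 2))               # → 1.8
print(round(poue(vo2[-1], ve[-1]), 2))       # POUE from peak values only
```

Note the difference in data requirements: OUES uses the whole test, whereas POUE needs only the peak measurements.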
Abstract:
The investigation of unexplained syncope remains a challenging clinical problem. In the present study we sought to evaluate the diagnostic value of a standardized work-up focusing on non invasive tests in patients with unexplained syncope referred to a syncope clinic, and whether certain combinations of clinical parameters are characteristic of rhythmic and reflex causes of syncope. METHODS AND RESULTS: 317 consecutive patients underwent a standardized work-up including a 12-lead ECG, physical examination, detailed history with screening for syncope-related symptoms using a structured questionnaire, followed by carotid sinus massage (CSM) and head-up tilt test. Invasive tests, including an electrophysiological study and implantation of a loop recorder, were only performed in those with structural heart disease or traumatic syncope. Our work-up identified an etiology in 81% of the patients. Importantly, three quarters of the causes were established non invasively by combining head-up tilt test, CSM and hyperventilation testing. Invasive tests yielded an additional 7% of diagnoses. Logistic analysis identified age and number of significant prodromes as the only predictive factors of rhythmic syncope. The same two factors, in addition to the duration of the ECG P-wave, were also predictive of vasovagal and psychogenic syncope. These factors, optimally combined in predictive models, showed a high negative and a modest positive predictive value. CONCLUSION: A standardized work-up focusing on non invasive tests allows more than three quarters of syncope causes to be established. Predictive models based on simple clinical parameters may help to distinguish between rhythmic and other causes of syncope.
Abstract:
Background: In Switzerland no HIV test is performed without the patient's consent, based on a Voluntary Counseling and Testing (VCT) policy. We hypothesized that a substantial proportion of patients going through elective surgery falsely believed that an HIV test was performed on a routine basis and that the lack of transmission of a result was interpreted as being HIV negative. Method: All patients with elective orthopedic surgery during 2007 were contacted by phone in 2008. A structured questionnaire assessed their beliefs about routine preoperative blood analysis (diabetes, coagulation function, HIV test and cholesterol level) as well as result awareness and interpretation. Variables included age and gender. Analyses were conducted using the software JMP 6.0.3. Results: 1123 patients were included. 130 (12 %) were excluded (i.e. unreachable, unable to communicate on the phone, not operated). 993 completed the survey (89 %). Median age was 51 (16-79). 50 % were female. 376 (38 %) patients thought they had an HIV test performed before surgery, but none of them had one. 298 (79 %) interpreted the absence of a result as a negative HIV test. A predictive factor for believing an HIV test had been done was an age below 50 years (45 % vs 33 % for 16-49 years old and 50-79 years old respectively, p < 0.001). No difference was observed between genders. Conclusion: In Switzerland, nearly 40 % of patients falsely thought an HIV test had been performed on a routine basis before surgery and were erroneously reassured about their HIV status. These results should either improve the information given to patients regarding preoperative exams, or motivate public health policy to consider HIV opt-out screening instead of the VCT strategy.
Abstract:
There are no validated criteria for the diagnosis of sensory neuronopathy (SNN) yet. In a preliminary monocenter study, a set of criteria relying on clinical and electrophysiological data showed good sensitivity and specificity for a diagnosis of probable SNN. The aim of this study was to test these criteria in a French multicenter study. 210 patients with sensory neuropathies from 15 francophone reference centers for neuromuscular diseases were included in the study with an expert diagnosis of non-SNN, SNN or suspected SNN according to the investigations performed in these centers. Diagnosis was obtained independently from the set of criteria to be tested. The expert diagnosis was taken as the reference against which the proposed SNN criteria were tested. The set relied on clinical and electrophysiological data easily obtainable with routine investigations. 9/61 (16.4 %) of non-SNN patients, 23/36 (63.9 %) of suspected SNN patients, and 102/113 (90.3 %) of SNN patients according to the expert diagnosis were classified as SNN by the criteria. The SNN criteria tested against the expert diagnosis in the SNN and non-SNN groups had 90.3 % (102/113) sensitivity, 85.2 % (52/61) specificity, 91.9 % (102/111) positive predictive value, and 82.5 % (52/63) negative predictive value. Discordance between the expert diagnosis and the SNN criteria occurred in 20 cases. After analysis of these cases, 11 could be reallocated to a correct diagnosis in accordance with the SNN criteria. The proposed criteria may be useful for the diagnosis of probable SNN in patients with sensory neuropathy. They can be reached with simple clinical and paraclinical investigations.
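The four performance figures quoted above follow directly from the 2×2 counts in the SNN and non-SNN groups. As a quick check (a generic sketch, not code from the study), they can be recomputed from true/false positives and negatives:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test-performance measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Counts reported in the abstract: 102 of 113 SNN patients classified SNN,
# 52 of 61 non-SNN patients classified non-SNN.
m = diagnostic_metrics(tp=102, fp=9, fn=11, tn=52)
for name, value in m.items():
    print(f"{name}: {100 * value:.1f} %")
```

Running this reproduces the reported 90.3 % sensitivity, 85.2 % specificity, 91.9 % PPV and 82.5 % NPV.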
Abstract:
The aim of this study was to investigate the performance of a new and accurate method for the detection of isoniazid (INH) and rifampicin (RIF) resistance among Mycobacterium tuberculosis isolates using a crystal violet decolourisation assay (CVDA). Fifty-five M. tuberculosis isolates obtained from culture stocks stored at -80°C were tested. After bacterial inoculation, the samples were incubated at 37°C for seven days and 100 µL of CV (25 mg/L stock solution) was then added to the control and sample tubes. The tubes were incubated for an additional 24-48 h. CV (blue/purple) was decolourised in the presence of bacterial growth; thus, if CV lost its colour in a sample containing a drug, the tested isolate was reported as resistant. The sensitivity, specificity, positive predictive value, negative predictive value and agreement for INH were 92.5%, 96.4%, 96.1%, 93.1% and 94.5%, respectively, and 88.8%, 100%, 100%, 94.8% and 96.3%, respectively, for RIF. The results were obtained within eight to nine days. This study shows that CVDA is an effective method to detect M. tuberculosis resistance to INH and RIF in developing countries. This method is rapid, simple and inexpensive. Nonetheless, further studies are necessary before routine laboratory implementation.
Abstract:
This study investigated the rate of human papillomavirus (HPV) persistence, associated risk factors, and predictors of cytological alteration outcomes in a cohort of human immunodeficiency virus-infected pregnant women over an 18-month period. HPV was typed through L1 gene sequencing in cervical smears collected during gestation and at 12 months after delivery. Outcomes were defined as nonpersistence (clearance of the HPV in the 2nd sample), re-infection (detection of different types of HPV in the 2 samples), and type-specific HPV persistence (the same HPV type found in both samples). An unfavourable cytological outcome was considered when the second exam showed progression to squamous intraepithelial lesion or high-grade squamous intraepithelial lesion. Ninety patients were studied. HPV DNA persistence occurred in 50% of the cases, composed of type-specific persistence (30%) or re-infection (20%). A low CD4+ T-cell count at entry was a risk factor for type-specific persistence, re-infection, or HPV DNA persistence. The odds ratio (OR) was almost three times higher in the type-specific group when compared with the re-infection group (OR = 2.8; 95% confidence interval: 0.43-22.79). Our findings show that bona fide (type-specific) HPV persistence is a stronger predictor for the development of cytological abnormalities, highlighting the need for HPV typing as opposed to HPV DNA testing in the clinical setting.
Abstract:
The McIsaac scoring system is a tool designed to predict the probability of streptococcal pharyngitis in children aged 3 to 17 years with a sore throat. Although it does not allow the physician to make the diagnosis of streptococcal pharyngitis, it enables the physician to identify those children with a sore throat in whom rapid antigen detection tests have a good predictive value.
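The individual criteria are not enumerated in the summary above; as a hedged illustration only, a commonly published version of the McIsaac (modified Centor) rule could be encoded as follows (point values reflect the widely cited rule, not this particular source):

```python
def mcisaac_score(temp_c, no_cough, tender_cervical_nodes,
                  tonsillar_swelling_or_exudate, age_years):
    """McIsaac score as commonly published: one point per clinical
    criterion, plus an age adjustment (+1 for 3-14 y, 0 for 15-44 y,
    -1 for >= 45 y)."""
    score = 0
    if temp_c > 38.0:
        score += 1  # history of fever / temperature > 38 C
    if no_cough:
        score += 1  # absence of cough
    if tender_cervical_nodes:
        score += 1  # tender anterior cervical adenopathy
    if tonsillar_swelling_or_exudate:
        score += 1  # tonsillar swelling or exudate
    if 3 <= age_years <= 14:
        score += 1
    elif age_years >= 45:
        score -= 1
    return score

# A 7-year-old with fever, no cough and tonsillar exudate, but no tender nodes:
print(mcisaac_score(38.6, True, False, True, 7))  # → 4
```

Higher scores indicate a higher pretest probability of streptococcal pharyngitis, which is what makes a rapid antigen test more informative in that subgroup.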
Abstract:
BACKGROUND: Replicative phenotypic HIV resistance testing (rPRT) uses recombinant infectious virus to measure viral replication in the presence of antiretroviral drugs. Due to its high sensitivity for detecting viral minorities and its dissecting power for complex viral resistance patterns and mixed virus populations, rPRT might help to improve HIV resistance diagnostics, particularly for patients with multiple drug failures. The aim was to investigate whether the addition of rPRT to genotypic resistance testing (GRT), compared to GRT alone, is beneficial for obtaining a virological response in heavily pre-treated HIV-infected patients. METHODS: Patients with resistance tests between 2002 and 2006 were followed within the Swiss HIV Cohort Study (SHCS). We assessed patients' virological success after their antiretroviral therapy was switched following resistance testing. Multilevel logistic regression models with SHCS centre as a random effect were used to investigate the association between the type of resistance test and virological response (HIV-1 RNA <50 copies/mL or ≥1.5 log reduction). RESULTS: Of 1158 individuals with resistance tests, 221 with GRT+rPRT and 937 with GRT were eligible for analysis. Overall virological response rates were 85.1% for GRT+rPRT and 81.4% for GRT. In the subgroup of patients with >2 previous failures, the odds ratio (OR) for virological response of GRT+rPRT compared to GRT was 1.45 (95% CI 1.00-2.09). Multivariate analyses indicate a significant improvement with GRT+rPRT compared to GRT alone (OR 1.68, 95% CI 1.31-2.15). CONCLUSIONS: In heavily pre-treated patients rPRT-based resistance information adds benefit, contributing to a higher rate of treatment success.
Abstract:
We investigated the association of trabecular bone score (TBS) with microarchitecture and mechanical behavior of human lumbar vertebrae. We found that TBS reflects vertebral trabecular microarchitecture and is an independent predictor of vertebral mechanics. However, the addition of TBS to areal BMD (aBMD) did not significantly improve prediction of vertebral strength. INTRODUCTION: The trabecular bone score (TBS) is a gray-level measure of texture using a modified experimental variogram which can be extracted from dual-energy X-ray absorptiometry (DXA) images. The current study aimed to confirm whether TBS is associated with trabecular microarchitecture and mechanics of human lumbar vertebrae, and whether its combination with BMD improves prediction of fracture risk. METHODS: Lumbar vertebrae (L3) were harvested fresh from 16 donors. The anteroposterior and lateral bone mineral content (BMC) and areal BMD (aBMD) of the vertebral body were measured using DXA; then, the TBS was extracted using TBS iNsight software (Medimaps SA, France). The trabecular bone volume (Tb.BV/tissue volume, TV), trabecular thickness (Tb.Th), degree of anisotropy, and structure model index (SMI) were measured using microcomputed tomography. Quasi-static uniaxial compressive testing was performed on L3 vertebral bodies to assess failure load and stiffness. RESULTS: The TBS was significantly correlated with Tb.BV/TV and SMI (r = 0.58 and -0.62; p = 0.02 and 0.01), but not related to BMC and BMD. TBS was significantly correlated with stiffness (r = 0.64; p = 0.007), independently of bone mass. Using stepwise multiple regression models, we failed to demonstrate that the combination of BMD and TBS was better at explaining mechanical behavior than either variable alone. However, the combination of TBS, Tb.Th, and BMC did perform better than each parameter alone, explaining 79 % of the variability in stiffness.
CONCLUSIONS: In our study, TBS was associated with microarchitecture parameters and with vertebral mechanical behavior, but TBS did not improve prediction of vertebral biomechanical properties in addition to aBMD.
Abstract:
There is little information concerning the long term outcome of patients with gastro-oesophageal reflux disease (GORD). Thus 109 patients with reflux symptoms (33 with erosive oesophagitis) with a diagnosis of GORD after clinical evaluation and oesophageal testing were studied. All patients were treated with a stepwise approach: (a) lifestyle changes aimed at reducing reflux were suggested, and antacids and the prokinetic agent domperidone were prescribed; (b) H2 blockers were added after two months when symptoms persisted; (c) anti-reflux surgery was indicated when there was no response to (b). Treatment was adjusted to maintain clinical remission during follow up. Long term treatment need was defined as minor when conservative measures sufficed for proper control, and as major if daily H2 blockers or surgery were required. The results showed that initial therapeutic need was (a), (b), or (c) for one third of the patients each. Of 103 patients available for follow up at three years and 89 at six years, respective therapeutic needs were minor in 52% and 55% and major in 48% and 45%. Eighty per cent of patients in (a), 67% in (b), and 17% in (c) required only conservative measures at six years. A decreasing lower oesophageal sphincter pressure (p < 0.001), radiological reflux (p = 0.028), and erosive oesophagitis (p = 0.031), but not initial clinical scores, were independent predictors of major therapeutic need as shown by multivariate analysis. The long term outcome of GORD is better than previously perceived.