998 results for SCORE TESTS
Abstract:
Objective: We used demographic and clinical data to design practical classification models for prediction of neurocognitive impairment (NCI) in people with HIV infection. Methods: The study population comprised 331 HIV-infected patients with available demographic, clinical, and neurocognitive data collected using a comprehensive battery of neuropsychological tests. Classification and regression trees (CART) were developed to obtain detailed and reliable models to predict NCI. Following a practical clinical approach, NCI was considered the main variable for study outcomes, and analyses were performed separately in treatment-naïve and treatment-experienced patients. Results: The study sample comprised 52 treatment-naïve and 279 treatment-experienced patients. In the first group, the variables identified as the best predictors of NCI were CD4 cell count and age (correct classification [CC]: 79.6%, 3 final nodes). In treatment-experienced patients, the variables most closely related to NCI were years of education, nadir CD4 cell count, central nervous system penetration-effectiveness score, age, employment status, and confounding comorbidities (CC: 82.1%, 7 final nodes). In patients with an undetectable viral load and no comorbidities, we obtained a fairly accurate model in which the main variables were nadir CD4 cell count, current CD4 cell count, time on current treatment, and past highest viral load (CC: 88%, 6 final nodes). Conclusion: Practical classification models to predict NCI in HIV infection can be obtained using demographic and clinical variables. An approach based on CART analyses may facilitate screening for HIV-associated neurocognitive disorders and complement clinical information about risk and protective factors for NCI in HIV-infected patients.
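The CART output for the treatment-naïve group (CD4 cell count and age, three final nodes) reduces to a short nested decision rule. The Python sketch below illustrates the shape of such a model; the predictors come from the abstract, but the function name and the cutoff values are illustrative placeholders, not the thresholds fitted in the study.

```python
def predict_nci_naive(cd4_count, age, cd4_cutoff=350, age_cutoff=50):
    """Sketch of a 3-final-node CART-style rule for treatment-naive
    patients. Predictors (CD4 count, age) are from the abstract;
    the cutoffs here are hypothetical, not the fitted thresholds."""
    if cd4_count < cd4_cutoff:
        return "NCI likely"      # node 1: low CD4 cell count
    if age >= age_cutoff:
        return "NCI likely"      # node 2: preserved CD4 but older age
    return "NCI unlikely"        # node 3: preserved CD4, younger age
```

A fitted CART model collapses to exactly this kind of if/else cascade, which is what makes it practical as a bedside screening aid.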
Abstract:
BACKGROUND: No studies have identified which patients with upper-extremity deep vein thrombosis (DVT) are at low risk for adverse events within the first week of therapy. METHODS: We used data from Registro Informatizado de la Enfermedad TromboEmbólica to explore in patients with upper-extremity DVT a prognostic score that correctly identified patients with lower limb DVT at low risk for pulmonary embolism, major bleeding, or death within the first week. RESULTS: As of December 2014, 1135 outpatients with upper-extremity DVT were recruited. Of these, 515 (45%) were treated at home. During the first week, three patients (0.26%) experienced pulmonary embolism, two (0.18%) had major bleeding, and four (0.35%) died. We assigned 1 point to patients with chronic heart failure, creatinine clearance levels of 30-60 mL/min, recent bleeding, abnormal platelet count, recent immobility, or cancer without metastases; 2 points to those with metastatic cancer; and 3 points to those with creatinine clearance levels < 30 mL/min. Overall, 759 (67%) patients scored ≤ 1 point and were considered to be at low risk. The rate of the composite outcome within the first week was 0.26% (95% confidence interval [CI] 0.004-0.87) in patients at low risk and 1.86% (95% CI 0.81-3.68) in the remaining patients. The C-statistic was 0.73 (95% CI 0.57-0.88). Net reclassification improvement was 22%, and integrated discrimination improvement was 0.0055. CONCLUSIONS: Using six easily available variables, we identified outpatients with upper-extremity DVT at low risk for adverse events within the first week. These data may help to safely treat more patients at home.
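The point assignments above translate directly into a scoring function. The sketch below encodes them; the function and argument names are invented for illustration, while the point values and the ≤1 low-risk threshold are taken from the abstract.

```python
def uedvt_score(chf=False, crcl=90.0, recent_bleeding=False,
                abnormal_platelets=False, recent_immobility=False,
                cancer=None):
    """Prognostic score from the abstract. `cancer` is None,
    'non-metastatic', or 'metastatic'; `crcl` is creatinine
    clearance in mL/min. Names are hypothetical."""
    score = 0
    for condition in (chf, recent_bleeding, abnormal_platelets,
                      recent_immobility):
        score += 1 if condition else 0
    if cancer == "non-metastatic":
        score += 1                 # cancer without metastases: 1 point
    elif cancer == "metastatic":
        score += 2                 # metastatic cancer: 2 points
    if crcl < 30:
        score += 3                 # clearance < 30 mL/min: 3 points
    elif crcl <= 60:
        score += 1                 # clearance 30-60 mL/min: 1 point
    return score

def low_risk(score):
    """Low risk for first-week adverse events at <= 1 point."""
    return score <= 1
```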
Abstract:
BACKGROUND: In Switzerland, patients may undergo "blood tests" without being informed what these are screening for. Inadequate doctor-patient communication may result in patient misunderstanding. We examined what patients in the emergency department (ED) believed they had been screened for and explored their attitudes to routine (non-targeted) human immunodeficiency virus (HIV) screening. METHODS: Between 1st October 2012 and 28th February 2013, a questionnaire-based survey was conducted among patients aged 16-70 years old presenting to the ED of Lausanne University Hospital. Patients were asked: (1) if they believed they had been screened for HIV; (2) if they agreed in principle to routine HIV screening and (3) if they agreed to be HIV tested during their current ED visit. RESULTS: Of 466 eligible patients, 411 (88%) agreed to participate. Mean age was 46 ± 16 years; 192 patients (47%) were women; 366 (89%) were Swiss or European; 113 (27%) believed they had been screened for HIV, the proportion increasing with age (p ≤0.01), 297 (72%) agreed in principle with routine HIV testing in the ED, and 138 patients (34%) agreed to be HIV tested during their current ED visit. CONCLUSION: In this ED population, 27% believed incorrectly they had been screened for HIV. Over 70% agreed in principle with routine HIV testing and 34% agreed to be tested during their current visit. These results demonstrate willingness among patients concerning routine HIV testing in the ED and highlight a need for improved doctor-patient communication about what a blood test specifically screens for.
Abstract:
In a cohort study of 182 consecutive patients with active endogenous Cushing's syndrome, the only predictor of fracture occurrence after adjustment for age, gender, bone mineral density (BMD), and trabecular bone score (TBS) was the 24-h urinary free cortisol (24hUFC) level, with a threshold of 1472 nmol/24 h (odds ratio, 3.00 (95 % confidence interval (CI), 1.52-5.92); p = 0.002). INTRODUCTION: The aim was to estimate the risk factors for fracture in subjects with endogenous Cushing's syndrome (CS) and to evaluate the value of the TBS in these patients. METHODS: All enrolled patients with CS (n = 182) were interviewed in relation to low-traumatic fractures and underwent lateral X-ray imaging from T4 to L5. BMD measurements were performed using a DXA Prodigy device (GEHC Lunar, Madison, Wisconsin, USA). The TBS was derived retrospectively from existing BMD scans, blinded to clinical outcome, using TBS iNsight software v2.1 (Medimaps, Merignac, France). Urinary free cortisol (24hUFC) was measured by immunochemiluminescence assay (reference range, 60-413 nmol/24 h). RESULTS: Among enrolled patients with CS (149 females; 33 males; mean age, 37.8 years (95 % confidence interval, 34.2-39.1); 24hUFC, 2370 nmol/24 h (2087-2632)), fractures were confirmed in 81 (44.5 %) patients, with 70 suffering from vertebral fractures, which were multiple in 53 cases; 24 patients reported non-vertebral fractures. The mean spine TBS was 1.207 (1.187-1.228), and the TBS Z-score was -1.86 (-2.07 to -1.65); the area under the curve (AUC) for fracture prediction using the mean spine TBS was 0.548 (95 % CI, 0.454-0.641). In the final regression model, the only predictor of fracture occurrence was the 24hUFC level (p = 0.001), with an odds ratio of 1.041 (95 % CI, 1.019-1.063) calculated for every 100 nmol/24-h cortisol elevation (AUC (24hUFC) = 0.705 (95 % CI, 0.629-0.782)). CONCLUSIONS: Young patients with CS have a low TBS.
However, the only predictor of low traumatic fracture is the severity of the disease itself, indicated by high 24hUFC levels.
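A per-increment odds ratio such as the 1.041 per 100 nmol/24 h reported above compounds multiplicatively, because the underlying model is linear on the log-odds scale. A small helper (names invented) shows the arithmetic:

```python
import math

def compound_or(or_per_increment, n_increments):
    """Compound a per-increment odds ratio over several increments.
    Equivalent to or_per_increment ** n_increments, written out on
    the log-odds scale where the regression model is linear."""
    return math.exp(n_increments * math.log(or_per_increment))

# e.g. a 1000 nmol/24 h elevation spans 10 increments of 100 nmol/24 h
```

With the abstract's figure of 1.041, a 1000 nmol/24 h elevation implies an odds ratio of roughly 1.041^10 ≈ 1.49.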
Abstract:
BACKGROUND: The Nutritional Risk Score (NRS) is a validated tool to identify patients who would benefit from nutritional interventions. Nutritional screening, however, has not yet been widely adopted by surgeons. Furthermore, the question of the reliability of nutritional assessment performed by surgeons remains unanswered. METHODS: Data were obtained from a recent randomised trial including 146 patients with an NRS ≥3 as assessed by the surgeons. Additional detailed nutritional assessment was performed for all patients by nutritional specialists and entered prospectively in a dedicated database. In this retrospective analysis, surgeons' scoring of the NRS and its components was compared to the assessment by nutritionists (considered the gold standard). RESULTS: Prospective NRS scores by surgeons and nutritionists were available for 141 patients (97%). Surgeons calculated an NRS of 7, 6, 5, 4, and 3 in 2, 8, 38, 21, and 72 patients, respectively. Nutritionists calculated an NRS of 6, 5, 4, 3, and 2 in 8, 26, 47, 57, and 3 patients, respectively. Surgeons' assessment was entirely correct in 56 patients (40%), while at least the final score was consistent in 63 patients (45%). Surgeons overrated the NRS in 21% of patients and underestimated the score in 29%. Evaluation of nutritional status accounted for most of the discrepancies (54%). CONCLUSION: Surgeons' assessment of nutritional status is modest at best. Close collaboration with nutritional specialists is recommended to avoid misdiagnosis and under-treatment of patients at nutritional risk.
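The over/under/exact percentages above come from a simple paired tally of the two raters' scores. A minimal sketch (names invented; the nutritionists' scores serve as the reference, as in the study):

```python
def agreement_summary(surgeon_scores, nutritionist_scores):
    """Tally paired NRS scores against the nutritionist reference.
    Returns (exact, overrated, underrated) counts."""
    exact = over = under = 0
    for s, n in zip(surgeon_scores, nutritionist_scores):
        if s == n:
            exact += 1           # final score consistent
        elif s > n:
            over += 1            # surgeon overrated the NRS
        else:
            under += 1           # surgeon underestimated the NRS
    return exact, over, under
```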
Abstract:
Objective: To evaluate by magnetic resonance imaging changes in bone marrow of patients undergoing treatment for type I Gaucher's disease. Materials and Methods: Descriptive, cross-sectional study of Gaucher's disease patients submitted to 3 T magnetic resonance imaging of femurs and lumbar spine. The images were blindly reviewed and the findings were classified according to the semiquantitative bone marrow burden (BMB) scoring system. Results: All of the seven evaluated patients (three men and four women) presented signs of bone marrow infiltration. Osteonecrosis of the femoral head was found in three patients, Erlenmeyer flask deformity in five, and no patient had vertebral body collapse. The mean BMB score was 11, ranging from 9 to 14. Conclusion: Magnetic resonance imaging is currently the method of choice for assessing bone involvement in Gaucher's disease in adults due to its high sensitivity to detect both focal and diffuse bone marrow changes, and the BMB score is a simplified method for semiquantitative analysis, without depending on advanced sequences or sophisticated hardware, allowing for the classification of the disease extent and assisting in the treatment monitoring.
Abstract:
BACKGROUND: Despite a low positive predictive value, diagnostic tests such as complete blood count (CBC) and C-reactive protein (CRP) are commonly used to evaluate whether infants with risk factors for early-onset neonatal sepsis (EOS) should be treated with antibiotics. STUDY DESIGN: We investigated the impact of implementing a protocol aiming at reducing the number of diagnostic tests in infants with risk factors for EOS in order to compare the diagnostic performance of repeated clinical examination with CBC and CRP measurement. The primary outcome was the time between birth and the first dose of antibiotics in infants treated for suspected EOS. RESULTS: Among the 11,503 infants born at 35 weeks during the study period, 222 were treated with antibiotics for suspected EOS. The proportion of infants receiving antibiotics for suspected EOS was 2.1% and 1.7% before and after the change of protocol (p = 0.09). Reduction of diagnostic tests was associated with earlier antibiotic treatment in infants treated for suspected EOS (hazard ratio 1.58; 95% confidence interval [CI] 1.20-2.07; p <0.001), and in infants with neonatal infection (hazard ratio 2.20; 95% CI 1.19-4.06; p = 0.01). There was no difference in the duration of hospital stay or in the proportion of infants requiring respiratory or cardiovascular support before and after the change of protocol. CONCLUSION: Reduction of diagnostic tests such as CBC and CRP does not delay initiation of antibiotic treatment in infants with suspected EOS. The importance of clinical examination in infants with risk factors for EOS should be emphasised.
Abstract:
UNLABELLED: It is uncertain whether bone mineral density (BMD) can accurately predict fracture in kidney transplant recipients. Trabecular bone score (TBS) provides information independent of BMD. Kidney transplant recipients had abnormal bone texture as measured by lumbar spine TBS, and a lower TBS was associated with incident fractures in recipients. INTRODUCTION: Trabecular bone score (TBS) is a texture measure derived from dual energy X-ray absorptiometry (DXA) lumbar spine images, providing information independent of bone mineral density. We assessed characteristics associated with TBS and fracture outcomes in kidney transplant recipients. METHODS: We included 327 kidney transplant recipients from Manitoba, Canada, who received a post-transplant DXA (median 106 days post-transplant). We matched each kidney transplant recipient (mean age 45 years, 39 % men) to three controls from the general population (matched on age, sex, and DXA date). Lumbar spine (L1-L4) DXA images were used to derive TBS. Non-traumatic incident fracture (excluding hand, foot, and craniofacial) (n = 31) was assessed during a mean follow-up of 6.6 years. We used multivariable linear regression models to test predictors of TBS, and multivariable Cox proportional hazard regression was used to estimate hazard ratios (HRs) per standard deviation decrease in TBS to express the gradient of risk. RESULTS: Compared to the general population, kidney transplant recipients had a significantly lower lumbar spine TBS (1.365 ± 0.129 versus 1.406 ± 0.125, P < 0.001). Multivariable linear regression revealed that receipt of a kidney transplant was associated with a significantly lower mean TBS compared to controls (-0.0369, 95 % confidence interval [95 % CI] -0.0537 to -0.0202). TBS was associated with fractures independent of the Fracture Risk Assessment score including BMD (adjusted HR per standard deviation decrease in TBS 1.64, 95 % CI 1.15-2.36). 
CONCLUSION: Kidney transplant recipients had abnormal bone texture as assessed by TBS and a lower lumbar spine TBS was associated with fractures in recipients.
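A hazard ratio expressed per standard deviation decrease, as above, converts to a relative hazard for any given TBS value once a reference mean and SD are fixed. A sketch of that conversion, plugging in the control-group figures from the abstract as the reference (the function name and this particular use of the numbers are illustrative):

```python
def hazard_ratio_for_tbs(tbs, mean_tbs, sd_tbs, hr_per_sd=1.64):
    """Relative hazard implied by a gradient-of-risk model:
    hr_per_sd is the HR per SD *decrease* in TBS (1.64 in the
    abstract); mean_tbs and sd_tbs define the reference population."""
    sd_below_mean = (mean_tbs - tbs) / sd_tbs
    return hr_per_sd ** sd_below_mean
```

For example, with the general-population reference of 1.406 ± 0.125, a TBS one SD below that mean carries the full 1.64-fold relative hazard.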
Abstract:
Background: Chronic liver diseases (CLDs) are significant causes of death in adults in many countries and are usually diagnosed at late stages. Early detection may allow time for treatment to prevent disease progression. Objectives: The aim of this study was to assess the feasibility of screening for unrecognized CLDs in a primary care nurse consultancy and report findings from screening. Methods: Two experienced nurses in a primary care nurse consultancy were trained to perform transient elastography (TE). Subjects aged 18 to 70 years were identified randomly from the health registry and invited to participate in a feasibility pilot study. Exclusion criteria were past or current history of liver diseases. Nurses collected demographic and clinical data and performed TE tests using Fibroscan to measure liver stiffness; a cutoff score of 6.8 kPa or greater was used as an indicator of the presence of CLD with fibrosis. Results: Accurate measurements were obtained in 495 of 502 participants (98.6%). Elevated liver stiffness was observed in 28 of 495 subjects (5.7%). Compared to patients with normal liver stiffness, patients with increased liver stiffness were older, were more frequently male, and had higher frequency of metabolic syndrome. Nonalcoholic fatty liver was the most common cause of CLD. Discussion: Following training in procedures for conducting TE, nurses in a primary care clinic were able to detect unrecognized CLDs in presumably healthy subjects. Early detection of CLDs is feasible in primary care clinics and may facilitate identification of undiagnosed CLD in adults.
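The screening rule above reduces to counting measurements at or above the 6.8 kPa cutoff. A minimal sketch (function name invented; the cutoff is the one stated in the abstract):

```python
def fibrosis_prevalence(stiffness_values, cutoff_kpa=6.8):
    """Share of valid liver-stiffness measurements at or above the
    cutoff (>= 6.8 kPa flags possible CLD with fibrosis)."""
    flagged = sum(1 for s in stiffness_values if s >= cutoff_kpa)
    return flagged / len(stiffness_values)
```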
Abstract:
Trabecular bone score (TBS) is a gray-level textural index of bone microarchitecture derived from lumbar spine dual-energy X-ray absorptiometry (DXA) images. TBS is a bone mineral density (BMD)-independent predictor of fracture risk. The objective of this meta-analysis was to determine whether TBS predicted fracture risk independently of FRAX probability and to examine their combined performance by adjusting the FRAX probability for TBS. We utilized individual-level data from 17,809 men and women in 14 prospective population-based cohorts. Baseline evaluation included TBS and the FRAX risk variables, and outcomes during follow-up (mean 6.7 years) comprised major osteoporotic fractures. The association between TBS, FRAX probabilities, and the risk of fracture was examined using an extension of the Poisson regression model in each cohort and for each sex and expressed as the gradient of risk (GR; hazard ratio per 1 SD change in risk variable in direction of increased risk). FRAX probabilities were adjusted for TBS using an adjustment factor derived from an independent cohort (the Manitoba Bone Density Cohort). Overall, the GR of TBS for major osteoporotic fracture was 1.44 (95% confidence interval [CI] 1.35-1.53) when adjusted for age and time since baseline and was similar in men and women (p > 0.10). When additionally adjusted for FRAX 10-year probability of major osteoporotic fracture, TBS remained a significant, independent predictor for fracture (GR = 1.32, 95% CI 1.24-1.41). The adjustment of FRAX probability for TBS resulted in a small increase in the GR (1.76, 95% CI 1.65-1.87 versus 1.70, 95% CI 1.60-1.81). A smaller change in GR for hip fracture was observed (FRAX hip fracture probability GR 2.25 vs. 2.22). TBS is a significant predictor of fracture risk independently of FRAX. 
The findings support the use of TBS as a potential adjustment for FRAX probability, though the impact of the adjustment remains to be determined in the context of clinical assessment guidelines. © 2015 American Society for Bone and Mineral Research.
Abstract:
Bodily injury claims have the greatest impact on the claim costs of motor insurance companies. The disability severity of motor claims is assessed in numerous European countries by means of score systems. In this paper, a zero-inflated generalized Poisson regression model is implemented to estimate the disability severity score of victims involved in motor accidents on Spanish roads. We show that the injury severity estimates may be automatically converted into financial terms by insurers at any point of the claim handling process. As such, the methodology described may be used by motor insurers operating in the Spanish market to monitor the size of bodily injury claims. By using insurance data, various applications are presented in which the score estimate of disability severity is of value to insurers, either for computing the claim compensation or for claim reserve purposes.
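The zero-inflation idea is that an excess of zero-severity claims is mixed with an ordinary count distribution. The sketch below writes out the probability mass of a plain zero-inflated Poisson; the paper's model is a zero-inflated generalized Poisson, which adds a dispersion parameter to the count component, so this is a simplified illustration of the mixture structure rather than the authors' exact model.

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson probability mass: with probability `pi`
    the observation is a structural zero; otherwise the count is
    Poisson(lam). Simplified relative to the generalized-Poisson
    count component used in the paper."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson   # structural + sampling zeros
    return (1 - pi) * poisson
```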
Abstract:
This study evaluated the performance of the Tuberculin Skin Test (TST) and QuantiFERON-TB Gold In-Tube (QFT) and the possible association of factors which may modify their results in young children (0-6 years) with recent contact with an index tuberculosis case. Materials and Methods: A cross-sectional study including 135 children was conducted in Manaus, Amazonas, Brazil. The TST and QFT were performed and the test results were analyzed in relation to the personal characteristics of the children studied and their relationship with the index case. Results: The rates of positivity were 34.8% (TST) and 26.7% (QFT), with 14.1% of results indeterminate by the QFT. Concordance between tests was fair (kappa = 0.35; p < 0.001). Both the TST and QFT were associated with the intensity of exposure (linear OR = 1.286, p = 0.005; linear OR = 1.161, p = 0.035, respectively), with only the TST being associated with the time of exposure (linear OR = 1.149, p = 0.009). The presence of intestinal helminths in the TST+ group was associated with negative QFT results (OR = 0.064, p = 0.049). In the TST- group, lower levels of ferritin were associated with QFT+ results (linear OR = 0.956, p = 0.036). Conclusions: Concordance between the TST and QFT was lower than expected. The factors associated with the discordant results were intestinal helminths, ferritin levels, and exposure time to the index tuberculosis case. In the TST+ group, helminths were associated with negative QFT results, suggesting impaired cell-mediated immunity. The TST-&QFT+ group had a shorter exposure time and lower ferritin levels, suggesting that QFT is faster and that ferritin may be a potential biomarker of early stages of tuberculosis infection.
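Cohen's kappa, the concordance statistic quoted above, compares observed agreement with the agreement expected by chance from each test's marginal positivity rate. A self-contained sketch for two binary raters such as TST and QFT (inputs as 0/1 lists; names invented):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters, e.g. TST vs QFT results
    coded 1 = positive, 0 = negative."""
    n = len(a)
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    p_a1 = sum(a) / n                      # rater A positivity rate
    p_b1 = sum(b) / n                      # rater B positivity rate
    # chance agreement: both positive or both negative by chance
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)
```

Values around 0.2-0.4, like the 0.35 reported here, are conventionally read as "fair" agreement.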
Abstract:
Genome-wide linkage studies have identified the 9q22 chromosomal region as linked with colorectal cancer (CRC) predisposition. A candidate gene in this region is transforming growth factor beta receptor 1 (TGFBR1). Investigation of TGFBR1 has focused on the common genetic variant rs11466445, a short exonic deletion of nine base pairs which results in truncation of a stretch of nine alanine residues to six alanine residues in the gene product. While the six-alanine (*6A) allele has been reported to be associated with increased risk of CRC in some population-based study groups, this association remains the subject of robust debate. To date, reports have been limited to population-based case-control association studies, or case-control studies of CRC families selecting one affected individual per family. No study has yet taken advantage of all the genetic information provided by multiplex CRC families. Methods: We have tested for an association between rs11466445 and risk of CRC using several family-based statistical tests in a new study group comprising members of non-syndromic high-risk CRC families sourced from three familial cancer centres, two in Australia and one in Spain. Results: We report a finding of a nominally significant result using the pedigree-based association test approach (PBAT; p = 0.028), while other family-based tests were non-significant, but with a p-value < 0.10 in each instance. These other tests included the Generalised Disequilibrium Test (GDT; p = 0.085), the parent-of-origin Generalised Disequilibrium Test (GDT-PO; p = 0.081), and the empirical Family-Based Association Test (FBAT; p = 0.096, additive model). Related-person case-control testing using the 'More Powerful' Quasi-Likelihood Score Test did not provide any evidence for association (MQLS; p = 0.41).
Conclusions: After conservatively taking into account considerations for multiple hypothesis testing, we find little evidence for an association between the TGFBR1*6A allele and CRC risk in these families. The weak support for an increase in risk in CRC-predisposed families is in agreement with recent meta-analyses of case-control studies, which estimate only a modest increase in sporadic CRC risk among *6A allele carriers.