975 results for Clinical validation
Abstract:
Dissertation submitted in fulfilment of the requirements for the degree of Doctor in Electrical and Computer Engineering (Digital and Perceptional Systems) at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
OBJECTIVE: The objective of the study was to develop a model for estimating patient 28-day in-hospital mortality using 2 different statistical approaches. DESIGN: The study was designed to develop an outcome prediction model for 28-day in-hospital mortality using (a) logistic regression with random effects and (b) a multilevel Cox proportional hazards model. SETTING: The study involved 305 intensive care units (ICUs) from the basic Simplified Acute Physiology Score (SAPS) 3 cohort. PATIENTS AND PARTICIPANTS: Patients (n = 17138) were from the SAPS 3 database with follow-up data pertaining to the first 28 days in hospital after ICU admission. INTERVENTIONS: None. MEASUREMENTS AND RESULTS: The database was divided randomly into 5 roughly equal-sized parts (at the ICU level). It was thus possible to run the model-building procedure 5 times, each time taking four fifths of the sample as a development set and the remaining fifth as the validation set. At 28 days after ICU admission, 19.98% of the patients were still in the hospital. Because of the different sampling space and outcome variables, both models presented a better fit in this sample than did the SAPS 3 admission score calibrated to vital status at hospital discharge, both on the general population and in major subgroups. CONCLUSIONS: Both statistical methods can be used to model the 28-day in-hospital mortality better than the SAPS 3 admission model. However, because the logistic regression approach is specifically designed to forecast 28-day mortality, and given the high uncertainty associated with the assumption of the proportionality of risks in the Cox model, the logistic regression approach proved to be superior.
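The ICU-level five-fold split described in this abstract (each run taking four fifths of the ICUs for development and one fifth for validation) can be sketched as follows. This is a minimal illustration, not the SAPS 3 code; the record layout and ICU identifiers are assumptions.

```python
import random

def icu_level_folds(records, n_folds=5, seed=42):
    """Split patient records into folds at the ICU level, so that all
    patients from one ICU land in the same fold (as in the SAPS 3 study)."""
    icus = sorted({r["icu"] for r in records})
    random.Random(seed).shuffle(icus)
    icu_to_fold = {icu: i % n_folds for i, icu in enumerate(icus)}
    folds = [[] for _ in range(n_folds)]
    for r in records:
        folds[icu_to_fold[r["icu"]]].append(r)
    return folds

# Hypothetical data: 100 patients spread over 10 ICUs.
records = [{"icu": i % 10, "patient": i} for i in range(100)]
folds = icu_level_folds(records)

# Each model-building run uses four folds for development, one for validation:
for k in range(5):
    dev = [r for j in range(5) if j != k for r in folds[j]]
    val = folds[k]
```

Splitting at the ICU level (rather than at the patient level) keeps all patients of a given unit together, so validation estimates are not inflated by within-ICU correlation.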
Abstract:
INTRODUCTION AND OBJECTIVES: Recurrent syncope has a significant impact on quality of life. The development of measurement scales to assess this impact that are easy to use in clinical settings is crucial. The objective of the present study is a preliminary validation of the Impact of Syncope on Quality of Life questionnaire for the Portuguese population. METHODS: The instrument underwent a process of translation, validation, analysis of cultural appropriateness and cognitive debriefing. A population of 39 patients with a history of recurrent syncope (>1 year) who underwent tilt testing, aged 52.1 ± 16.4 years (21-83), 43.5% male, most in active employment (n=18) or retired (n=13), constituted a convenience sample. The resulting Portuguese version is similar to the original, with 12 items in a single aggregate score, and underwent statistical validation, with assessment of reliability, validity and stability over time. RESULTS: With regard to reliability, the internal consistency of the scale is 0.9. Assessment of convergent and discriminant validity showed statistically significant results (p<0.01). Regarding stability over time, a test-retest of this instrument at six months after tilt testing with 22 patients of the sample who had not undergone any clinical intervention found no statistically significant changes in quality of life. CONCLUSIONS: The results indicate that this instrument is of value for assessing quality of life in patients with recurrent syncope in Portugal.
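The internal-consistency figure reported above (0.9) is a Cronbach's alpha. A minimal pure-Python sketch of the standard formula follows; the toy item data used to exercise it are illustrative only, not the study's questionnaire scores.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns of equal length.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

When every item moves in lockstep the item variances sum to a small fraction of the total-score variance and alpha approaches 1; uncorrelated items push it toward 0.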
Abstract:
Twenty-four hepatitis C virus (HCV) patients coinfected with human T-lymphotropic virus type 1 (HTLV-1) were compared with six coinfected with HTLV-2 and 55 infected with HCV alone, regarding clinical, epidemiological, laboratory and histopathological data. Fisher's discriminant analysis was applied to define functions capable of differentiating between the study groups (HCV, HCV/HTLV-1 and HCV/HTLV-2). Discriminant accuracy was evaluated by cross-validation. Alcohol consumption, use of intravenous drugs or inhaled cocaine, and sexual partnership with intravenous drug users were more frequent in the HCV/HTLV-2 group, whereas patients in the HCV group more often reported abdominal pain or a sexual partner with hepatitis. Coinfected patients presented higher platelet counts, but aminotransferase and gamma-glutamyl transpeptidase levels were higher among HCV-monoinfected subjects. No significant difference between the groups was seen in liver histopathological findings. Through discriminant analysis, classification functions were defined, including sex, age group, intravenous drug use and sexual partner with hepatitis. Cross-validation revealed high discriminant accuracy for the HCV group.
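For the two-class case, Fisher's discriminant analysis as used above reduces to projecting observations onto the direction w = Sw⁻¹(m1 − m0), where Sw is the within-class scatter matrix. A hand-rolled sketch for 2-D feature vectors follows; the feature values are synthetic, chosen only to show the mechanics.

```python
def fisher_direction(x0, x1):
    """Two-class Fisher discriminant direction w = Sw^-1 (m1 - m0)
    for 2-D feature vectors (the 2x2 scatter matrix is inverted by hand)."""
    def mean(xs):
        n = len(xs)
        return [sum(v[0] for v in xs) / n, sum(v[1] for v in xs) / n]

    def scatter(xs, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for v in xs:
            d = [v[0] - m[0], v[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    m0, m1 = mean(x0), mean(x1)
    s0, s1 = scatter(x0, m0), scatter(x1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]

def classify(w, x0, x1, v):
    """Assign v to the class whose projected centroid is nearer."""
    proj = lambda p: w[0] * p[0] + w[1] * p[1]
    c0 = sum(map(proj, x0)) / len(x0)
    c1 = sum(map(proj, x1)) / len(x1)
    return 0 if abs(proj(v) - c0) < abs(proj(v) - c1) else 1

# Two synthetic, well-separated groups:
x0 = [(0, 0), (1, 0), (0, 1), (1, 1)]
x1 = [(5, 5), (6, 5), (5, 6), (6, 6)]
w = fisher_direction(x0, x1)
```

Cross-validation, as in the study, would repeat this fit while holding out each subject in turn and scoring the held-out classification.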
Abstract:
INTRODUCTION: Leptospirosis is often mistaken for other acute febrile illnesses because of its nonspecific presentation. Bacteriologic, serologic, and molecular methods have several limitations for early diagnosis: technical complexity, low availability, low sensitivity in early disease, or high cost. This study aimed to validate a case definition, based on simple clinical and laboratory tests, intended for bedside diagnosis of leptospirosis among hospitalized patients. METHODS: Adult patients admitted to two reference hospitals in Recife, Brazil, with a febrile illness of less than 21 days and a clinical suspicion of leptospirosis were included to test a case definition comprising ten clinical and laboratory criteria. Leptospirosis was confirmed or excluded by a composite reference standard (microscopic agglutination test, ELISA, and blood culture). Test properties were determined for each cutoff number of criteria from the case definition. RESULTS: Ninety-seven patients were included; 75 had confirmed leptospirosis and 22 did not. The mean number of criteria fulfilled was 7.8±1.2 for confirmed leptospirosis and 5.9±1.5 for non-leptospirosis patients (p<0.0001). The best combination of sensitivity (85.3%) and specificity (68.2%) was found at a cutoff of 7 or more criteria, with positive and negative predictive values of 90.1% and 57.7%, respectively; accuracy was 81.4%. CONCLUSIONS: At a cutoff of at least 7 criteria, the case definition reached only moderate sensitivity and specificity, but a high positive predictive value. Its simplicity and low cost make it useful for rapid bedside diagnosis of leptospirosis in Brazilian hospitalized patients with acute severe febrile disease.
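The test properties quoted above all follow from one 2x2 table. The cell counts below are back-calculated from the reported proportions (75 confirmed, 22 excluded: TP=64, FN=11, FP=7, TN=15) purely as an arithmetic illustration.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test properties from a 2x2 diagnostic table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Counts back-calculated from the abstract (cutoff of >= 7 criteria):
m = diagnostic_metrics(tp=64, fp=7, fn=11, tn=15)
```

Reassuringly, these counts reproduce every figure in the abstract (85.3%, 68.2%, 90.1%, 57.7%, 81.4%).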
Abstract:
Cardiovascular diseases (CVDs) are among the leading causes of death and disability worldwide, and one of their underlying causes is hypercholesterolemia. Hypercholesterolemia can have genetic (familial hypercholesterolemia, FH) or non-genetic causes (clinical hypercholesterolemia, CH), the former being much more severe, with occurrence of premature atherosclerosis. While the pathophysiological role of homocysteine (Hcy) in CVD is still controversial, molecular targeting of proteins by S- and N-homocysteinylation offers a new paradigm to be considered in the vascular pathogenesis of hypercholesterolemia. In this regard, the present study aims to give new insights into protein targeting by Hcy in both CH and FH. A total of 187 subjects were included: 65 normolipidemic and 122 hypercholesterolemic. Total (tHcy) and free (fHcy) fractions were quantified in serum samples after validation of an HPLC-FD method, to assess S-homocysteinylation. The lactonase (LACase) activity of paraoxonase-1 (PON1) was also quantified by a colorimetric assay, as a surrogate of N-homocysteinylation. tHcy did not differ among groups. Nevertheless, fHcy declined in the hypercholesterolemic groups, most markedly in the FH population. Consequently, there appears to be an increase in S-homocysteinylation, regardless of lipid-lowering therapy (LLT). Moreover, despite LLT use, LACase activity was lower in FH, so the risk of protein N-homocysteinylation appears to be higher. The decrease in the LACase/ApoA1 and LACase/HDL ratios in FH shows that HDL is dysfunctional in this population, despite its normal concentration values. The data support the view that the pathophysiological role of Hcy in hypercholesterolemia may reside in its ability to post-translationally modify proteins. This role is particularly evident in FH. In the future, it will be interesting to identify which target proteins are modified and thus involved in the progression of vascular pathology.
Abstract:
BACKGROUND: The aim was to validate a new practical Sepsis Severity Score for patients with complicated intra-abdominal infections (cIAIs), including the clinical condition at admission (severe sepsis/septic shock), the origin of the cIAI, the delay in source control, the setting of acquisition, and risk factors such as age and immunosuppression. METHODS: The WISS study (WSES cIAIs Score Study) was a multicenter observational study conducted in 132 medical institutions worldwide during a four-month period (October 2014-February 2015). Four thousand five hundred thirty-three patients with a mean age of 51.2 years (range 18-99) were enrolled. RESULTS: Univariate analysis showed that all factors previously included in the WSES Sepsis Severity Score differed highly significantly between those who died and those who survived (p < 0.0001). The multivariate logistic regression model was highly significant (p < 0.0001, R2 = 0.54) and showed that all these factors were independent predictors of sepsis mortality. Receiver operating characteristic analysis showed that the WSES Sepsis Severity Score had excellent predictive power for mortality. A score above 5.5 was the best predictor of mortality, with a sensitivity of 89.2%, a specificity of 83.5%, and a positive likelihood ratio of 5.4. CONCLUSIONS: The WSES Sepsis Severity Score for patients with complicated intra-abdominal infections can be used at a global level. It has shown high sensitivity, specificity, and likelihood ratio, and may help in making clinical decisions.
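The positive likelihood ratio above follows directly from sensitivity and specificity (LR+ = sensitivity / (1 - specificity)), and a "best" cutoff such as 5.5 is conventionally chosen by maximising Youden's J. A minimal sketch of both, with synthetic scores standing in for the study data:

```python
def positive_likelihood_ratio(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

def best_cutoff(scores_pos, scores_neg, cutoffs):
    """Pick the cutoff maximising Youden's J = sensitivity + specificity - 1."""
    best = None
    for c in cutoffs:
        sens = sum(s > c for s in scores_pos) / len(scores_pos)
        spec = sum(s <= c for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j)
    return best[0]
```

With the reported figures, LR+ = 0.892 / (1 - 0.835) ≈ 5.4, matching the abstract.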
Abstract:
The Experiences in Close Relationships (ECR) inventory evaluates attachment in close relationships during adulthood along two dimensions that may be present in such relationships: avoidance of proximity and anxiety about abandonment. It is a 36-item self-report instrument rated on a 7-point Likert scale. The Portuguese version was administered to a sample of 551 university students (60% female), the majority aged between 19 and 24 years (88%) and in a dating relationship (86%). A principal components analysis with oblimin rotation was performed. The total scale has good internal consistency (α=.86), as do the two subscales: anxiety (α=.86) and avoidance (α=.88). The two dimensions are significantly correlated with socio-demographic variables, relational characteristics (jealousy, relationship distress, and commitment), wishes (enmeshment versus differentiation) and fears (abandonment versus control) related to attitudes in significant relationships, which attests to the construct validity of the instrument. The results are consistent with the original version and other adaptations of the ECR. Practitioners and researchers in clinical psychology and related areas now have at their disposal the Portuguese version of the ECR inventory, which has proved highly useful in the study of close relationships, and specifically of attachment in adulthood.
Abstract:
Doctoral thesis in Health Sciences
Abstract:
Background: 30-40% of cardiac resynchronization therapy cases do not achieve favorable outcomes. Objective: This study aimed to develop predictive models for the combined endpoint of cardiac death and transplantation (Tx) at different stages of cardiac resynchronization therapy (CRT). Methods: Prospective observational study of 116 patients aged 64.8 ± 11.1 years, 68.1% of whom had functional class (FC) III and 31.9% ambulatory class IV. Clinical, electrocardiographic and echocardiographic variables were assessed using Cox regression and Kaplan-Meier curves. Results: The cardiac mortality/Tx rate was 16.3% during the follow-up period of 34.0 ± 17.9 months. Prior to implantation, right ventricular dysfunction (RVD), ejection fraction < 25% and use of high doses of diuretics (HDD) increased the risk of cardiac death and Tx by 3.9-, 4.8-, and 5.9-fold, respectively. In the first year after CRT, RVD, HDD and hospitalization due to congestive heart failure increased the risk of death at hazard ratios of 3.5, 5.3, and 12.5, respectively. In the second year after CRT, RVD and FC III/IV were significant risk factors for mortality in the multivariate Cox model. The accuracy rates of the models were 84.6% at preimplantation, 93% in the first year after CRT, and 90.5% in the second year after CRT. The models were validated by bootstrapping. Conclusion: We developed predictive models of cardiac death and Tx at different stages of CRT based on the analysis of simple and easily obtainable clinical and echocardiographic variables. The models showed good accuracy and adjustment, were validated internally, and are useful in the selection, monitoring and counseling of patients indicated for CRT.
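The internal validation by bootstrapping mentioned above can be sketched as a percentile bootstrap around a model's accuracy: resample patients with replacement and recompute the statistic each time. The 0/1 correct-prediction indicators below are hypothetical stand-ins for a fitted model's per-patient results.

```python
import random

def bootstrap_ci(values, stat=lambda xs: sum(xs) / len(xs),
                 n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a statistic
    (e.g. the accuracy of a prediction model over resampled patients)."""
    rng = random.Random(seed)
    n = len(values)
    reps = sorted(stat([values[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical model with ~90% accuracy on 100 patients:
hits = [1] * 90 + [0] * 10
lo, hi = bootstrap_ci(hits)
```

A full bootstrap validation would also refit the model on each resample to estimate optimism, not just the sampling variability of the final accuracy.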
Abstract:
BACKGROUND: Clinical scores may help physicians to better assess the individual risk/benefit of oral anticoagulant therapy. We aimed to externally validate and compare the prognostic performance of 7 clinical prediction scores for major bleeding events during oral anticoagulation therapy. METHODS: We followed 515 adult patients taking oral anticoagulants to measure the first major bleeding event over a 12-month follow-up period. The performance of each score to predict the risk of major bleeding and the physician's subjective assessment of bleeding risk were compared with the C statistic. RESULTS: The cumulative incidence of a first major bleeding event during follow-up was 6.8% (35/515). According to the 7 scoring systems, the proportions of major bleeding ranged from 3.0% to 5.7% for low-risk, 6.7% to 9.9% for intermediate-risk, and 7.4% to 15.4% for high-risk patients. The overall predictive accuracy of the scores was poor, with the C statistic ranging from 0.54 to 0.61 and not significantly different from each other (P=.84). Only the Anticoagulation and Risk Factors in Atrial Fibrillation score performed slightly better than would be expected by chance (C statistic, 0.61; 95% confidence interval, 0.52-0.70). The performance of the scores was not statistically better than physicians' subjective risk assessments (C statistic, 0.55; P=.94). CONCLUSION: The performance of 7 clinical scoring systems in predicting major bleeding events in patients receiving oral anticoagulation therapy was poor and not better than physicians' subjective assessments.
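The C statistic used to compare the bleeding scores above is the probability that a randomly chosen patient who bled was assigned a higher predicted risk than a randomly chosen patient who did not (0.5 = chance, 1.0 = perfect discrimination). A minimal pairwise sketch:

```python
def c_statistic(risk_events, risk_no_events):
    """Concordance (C) statistic: fraction of event/no-event pairs in which
    the event patient received the higher predicted risk (ties count 0.5)."""
    concordant = 0.0
    for e in risk_events:
        for n in risk_no_events:
            if e > n:
                concordant += 1.0
            elif e == n:
                concordant += 0.5
    return concordant / (len(risk_events) * len(risk_no_events))
```

Values around 0.54-0.61, as reported for the seven scores, mean the scores rank bleeders above non-bleeders only slightly more often than a coin flip.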
Abstract:
BACKGROUND AND AIMS: Inflammatory bowel disease (IBD) frequently manifests during childhood and adolescence. To provide a comprehensive picture of a patient's health status, health-related quality of life (HRQoL) instruments are an essential complement to clinical symptoms and functional limitations. Currently, the IMPACT-III questionnaire is one of the most frequently used disease-specific HRQoL instruments among patients with IBD. However, there is a lack of studies examining the validity and reliability of this instrument. METHODS: 146 paediatric IBD patients from the multicenter Swiss IBD paediatric cohort study database were included in the study. Medical and laboratory data were extracted from the hospital records. HRQoL data were assessed by means of standardized questionnaires filled out by the patients in a face-to-face interview. RESULTS: The original six IMPACT-III domain scales could not be replicated in the current sample. A principal component analysis with the extraction of four factor scores revealed the most robust solution. The four factors indicated good internal reliability (Cronbach's alpha=.64-.86), good concurrent validity as measured by correlations with the generic KIDSCREEN-27 scales, and excellent discriminant validity for the dimension of physical functioning, as measured by HRQoL differences between active and inactive disease-severity groups (p<.001, d=1.04). CONCLUSIONS: This study of Swiss children with IBD indicates good validity and reliability for the IMPACT-III questionnaire. However, our findings suggest a slightly different factor structure than originally proposed. The IMPACT-III questionnaire can be recommended for use in clinical practice. The factor structure should be further examined in other samples.
Abstract:
BACKGROUND: Adequate pain assessment is critical for evaluating the efficacy of analgesic treatment in clinical practice and during the development of new therapies. Yet the currently used scores of global pain intensity fail to reflect the diversity of pain manifestations and the complexity of underlying biological mechanisms. We have developed a tool for a standardized assessment of pain-related symptoms and signs that differentiates pain phenotypes independent of etiology. METHODS AND FINDINGS: Using a structured interview (16 questions) and a standardized bedside examination (23 tests), we prospectively assessed symptoms and signs in 130 patients with peripheral neuropathic pain caused by diabetic polyneuropathy, postherpetic neuralgia, or radicular low back pain (LBP), and in 57 patients with non-neuropathic (axial) LBP. A hierarchical cluster analysis revealed distinct association patterns of symptoms and signs (pain subtypes) that characterized six subgroups of patients with neuropathic pain and two subgroups of patients with non-neuropathic pain. Using a classification tree analysis, we identified the most discriminatory assessment items for the identification of pain subtypes. We combined these six interview questions and ten physical tests in a pain assessment tool that we named Standardized Evaluation of Pain (StEP). We validated StEP for the distinction between radicular and axial LBP in an independent group of 137 patients. StEP identified patients with radicular pain with high sensitivity (92%; 95% confidence interval [CI] 83%-97%) and specificity (97%; 95% CI 89%-100%). The diagnostic accuracy of StEP exceeded that of a dedicated screening tool for neuropathic pain and spinal magnetic resonance imaging. In addition, we were able to reproduce subtypes of radicular and axial LBP, underscoring the utility of StEP for discerning distinct constellations of symptoms and signs. CONCLUSIONS: We present a novel method of identifying pain subtypes that we believe reflect underlying pain mechanisms. We demonstrate that this new approach to pain assessment helps separate radicular from axial back pain. Beyond diagnostic utility, a standardized differentiation of pain subtypes that is independent of disease etiology may offer a unique opportunity to improve targeted analgesic treatment.
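Hierarchical cluster analysis of the kind used to derive the pain subtypes repeatedly merges the two closest groups of patient profiles until the desired number of clusters remains. A plain-Python sketch with complete linkage (the abstract does not specify the linkage; the symptom-profile vectors below are synthetic):

```python
def agglomerate(points, n_clusters,
                dist=lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5):
    """Agglomerative clustering with complete linkage: repeatedly merge
    the two clusters whose farthest members are closest."""
    clusters = [[p] for p in points]

    def linkage(c1, c2):
        return max(dist(a, b) for a in c1 for b in c2)

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Hypothetical 2-D symptom profiles forming two obvious groups:
points = [(0, 0), (0, 1), (10, 10), (10, 11)]
groups = agglomerate(points, 2)
```

In practice one inspects the full merge tree (dendrogram) to choose the number of subtypes, rather than fixing it in advance.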
Abstract:
Background: It has previously been shown with English-speaking children that food allergy clearly affects their quality of life. The first food allergy quality of life questionnaire was validated in English in 2008; however, to date no questionnaire has been available in French. Objectives: To validate a French version of the Food Allergy Quality of Life Questionnaire - Parent Form (FAQLQ-PF), developed and validated in English by DunnGalvin et al. Methods: The questionnaire was translated from English to French by two independent French-speaking translators and back-translated by an independent English-speaking translator. We then recruited 30 patients between 0 and 12 years of age with a food allergy. Parents of these children answered the questionnaire during a clinic visit. The results were then analysed and compared with the results of DunnGalvin's study and with the Food Allergy Independent Measure (FAIM). Results: 27 questionnaires were fully completed and available for analysis. Median age was 6 years, with a range from 18 months to 12 years, and a girl/boy ratio of 1:1.14. A Cronbach's alpha of 0.748 was found. Validity was demonstrated by significant correlations between the FAQLQ-PF and the FAIM. Conclusion: The French version of the FAQLQ was validated and will permit assessment of quality of life in French-speaking children with food allergy. It will be an important tool for clinical research and will allow research collaboration between French- and English-speaking research teams.
Abstract:
BACKGROUND: Chest pain raises concern for the possibility of coronary heart disease. Scoring methods have been developed to identify coronary heart disease in emergency settings, but not in primary care. METHODS: Data were collected from a multicenter Swiss clinical cohort study including 672 consecutive patients with chest pain who had visited one of 59 family practitioners' offices. Using the delayed diagnosis as the reference, we derived a prediction rule to rule out coronary heart disease by means of a logistic regression model. Known cardiovascular risk factors, pain characteristics, and physical signs associated with coronary heart disease were explored to develop a clinical score. Patients diagnosed with angina or acute myocardial infarction within the year following their initial visit comprised the coronary heart disease group. RESULTS: The coronary heart disease score was derived from eight variables: age, gender, duration of chest pain from 1 to 60 minutes, substernal chest pain location, pain increasing with exertion, absence of a tenderness point at palpation, cardiovascular risk factors, and personal history of cardiovascular disease. The area under the receiver operating characteristic curve was 0.95 (95% confidence interval, 0.92-0.97). Using the 5th percentile of scores among coronary heart disease patients as the cutoff, 413 patients were considered low risk. Internal validity was confirmed by bootstrapping. External validation using data from a German cohort (Marburg, n = 774) revealed an area under the curve of 0.75 (95% confidence interval, 0.72-0.81), with a sensitivity of 85.6% and a specificity of 47.2%. CONCLUSIONS: This score, based only on history and physical examination, is a complementary tool for ruling out coronary heart disease in primary care patients complaining of chest pain.
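The mechanics of such a rule-out score can be sketched as follows: a logistic model converts a weighted sum of the eight variables into a probability, and the rule-out cutoff is set at a low percentile of the scores observed among confirmed coronary heart disease patients. Both the coefficients and the percentile routine below are hypothetical illustrations, not the published score.

```python
import math

def predicted_risk(intercept, betas, x):
    """Logistic model: p = 1 / (1 + exp(-(intercept + beta . x)))."""
    lp = intercept + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-lp))

def rule_out_threshold(risks_chd, pct=5):
    """Rule-out cutoff: the pct-th percentile of predicted risks among
    confirmed coronary heart disease patients (nearest-rank percentile)."""
    s = sorted(risks_chd)
    k = max(0, int(round(pct / 100 * len(s))) - 1)
    return s[k]
```

Patients whose predicted risk falls below the threshold would be labelled low risk; choosing a low percentile of the disease group keeps the rule's false-negative rate small by construction.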