67 results for Clinical validation
Abstract:
OBJECTIVE To validate use of stress MRI for evaluation of stifle joints of dogs with an intact or deficient cranial cruciate ligament (CrCL). SAMPLE 10 cadaveric stifle joints from 10 dogs. PROCEDURES A custom-made limb-holding device and a pulley system linked to a paw plate were used to apply axial compression across the stifle joint and induce cranial tibial translation with the joint in various degrees of flexion. By use of sagittal proton density-weighted MRI, CrCL-intact and deficient stifle joints were evaluated under conditions of loading stress simulating the tibial compression test or the cranial drawer test. Medial and lateral femorotibial subluxation following CrCL transection measured under a simulated tibial compression test and a cranial drawer test were compared. RESULTS By use of tibial compression test MRI, the mean ± SD cranial tibial translations in the medial and lateral compartments were 9.6 ± 3.7 mm and 10 ± 4.1 mm, respectively. By use of cranial drawer test MRI, the mean ± SD cranial tibial translations in the medial and lateral compartments were 8.3 ± 3.3 mm and 9.5 ± 3.5 mm, respectively. No significant difference in femorotibial subluxation was found between stress MRI techniques. Femorotibial subluxation elicited by use of the cranial drawer test was greater in the lateral than in the medial compartment. CONCLUSIONS AND CLINICAL RELEVANCE Both stress techniques induced stifle joint subluxation following CrCL transection that was measurable by use of MRI, suggesting that both methods may be further evaluated for clinical use.
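The comparison above (mean ± SD translation under two stress techniques on the same joints, with no significant difference between techniques) is a paired design. A minimal sketch of such a paired comparison is shown below; the per-joint measurements are hypothetical placeholders, not the study's raw data.

```python
# Illustrative sketch (not the study's raw data): compare per-joint cranial
# tibial translation (mm) measured under the two simulated stress techniques
# with a paired t-test, since each cadaveric joint is measured under both loads.
import numpy as np
from scipy import stats

# hypothetical per-joint measurements for the medial compartment (mm)
tct_mri = np.array([9.1, 10.4, 8.2, 12.5, 7.9, 9.8, 11.2, 6.4, 10.0, 9.5])  # tibial compression test
cd_mri = np.array([8.0, 9.7, 7.5, 11.1, 7.2, 8.9, 10.3, 5.8, 9.1, 8.7])     # cranial drawer test

t_stat, p_value = stats.ttest_rel(tct_mri, cd_mri)
print(f"TCT-MRI: {tct_mri.mean():.1f} ± {tct_mri.std(ddof=1):.1f} mm")
print(f"CD-MRI:  {cd_mri.mean():.1f} ± {cd_mri.std(ddof=1):.1f} mm")
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```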
Abstract:
BACKGROUND AND PURPOSE The DRAGON score predicts functional outcome in the hyperacute phase of intravenous thrombolysis treatment of ischemic stroke patients. We aimed to validate the score in a large multicenter cohort in anterior and posterior circulation. METHODS Prospectively collected data of consecutive ischemic stroke patients who received intravenous thrombolysis in 12 stroke centers were merged (n=5471). We excluded patients lacking data necessary to calculate the score and patients with missing 3-month modified Rankin scale scores. The final cohort comprised 4519 eligible patients. We assessed the performance of the DRAGON score with area under the receiver operating characteristic curve in the whole cohort for both good (modified Rankin scale score, 0-2) and miserable (modified Rankin scale score, 5-6) outcomes. RESULTS Area under the receiver operating characteristic curve was 0.84 (0.82-0.85) for miserable outcome and 0.82 (0.80-0.83) for good outcome. Proportions of patients with good outcome were 96%, 93%, 78%, and 0% for 0 to 1, 2, 3, and 8 to 10 score points, respectively. Proportions of patients with miserable outcome were 0%, 2%, 4%, 89%, and 97% for 0 to 1, 2, 3, 8, and 9 to 10 points, respectively. When tested separately for anterior and posterior circulation, there was no difference in performance (P=0.55); areas under the receiver operating characteristic curve were 0.84 (0.83-0.86) and 0.82 (0.78-0.87), respectively. No sex-related difference in performance was observed (P=0.25). CONCLUSIONS The DRAGON score showed very good performance in the large merged cohort in both anterior and posterior circulation strokes. The DRAGON score provides rapid estimation of patient prognosis and supports clinical decision-making in the hyperacute phase of stroke care (eg, when invasive add-on strategies are considered).
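The headline metric in this validation is the area under the receiver operating characteristic curve for a 0-10 clinical score against a binary 3-month outcome. A minimal sketch of that computation follows; the score and outcome arrays are simulated placeholders, not the study cohort.

```python
# Sketch: AUC of an ordinal clinical score (0-10) for predicting a binary outcome.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
dragon_score = rng.integers(0, 11, size=500)             # hypothetical DRAGON scores (0-10)
p_miserable = 1 / (1 + np.exp(-(dragon_score - 6)))      # toy score-outcome relationship
miserable_outcome = rng.binomial(1, p_miserable)         # mRS 5-6 at 3 months (toy labels)

auc = roc_auc_score(miserable_outcome, dragon_score)
print(f"AUC for miserable outcome: {auc:.2f}")
```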
Abstract:
In patients diagnosed with pharmaco-resistant epilepsy, cerebral areas responsible for seizure generation can be defined by implantation of intracranial electrodes. The identification of the epileptogenic zone (EZ) is based on visual inspection of the intracranial electroencephalogram (IEEG) performed by highly qualified neurophysiologists. New computer-based quantitative EEG analyses have been developed in collaboration with the signal analysis community to expedite EZ detection. The aim of the present report is to compare different signal analysis approaches developed in four different European laboratories working in close collaboration with four European Epilepsy Centers. Computer-based signal analysis methods were retrospectively applied to IEEG recordings performed in four patients undergoing pre-surgical exploration of pharmaco-resistant epilepsy. The four methods developed by the different teams to identify the EZ are based either on frequency analysis, on nonlinear signal analysis, on connectivity measures, or on statistical parametric mapping of epileptogenicity indices. All methods converge on the identification of the EZ in patients who present with fast activity at seizure onset. When traditional visual inspection was not successful in detecting the EZ on IEEG, the different signal analysis methods produced highly discordant results. Quantitative analysis of IEEG recordings complements clinical evaluation by contributing to the study of epileptogenic networks during seizures. We demonstrate that the sensitivity of different computer-based methods for detecting the EZ, relative to visual EEG inspection, depends on the specific seizure pattern.
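One of the approaches mentioned above is frequency analysis of the IEEG, where fast activity at seizure onset drives EZ detection. A minimal sketch of that idea, under the assumption of a single toy channel and an illustrative 60-100 Hz "fast activity" band:

```python
# Sketch: estimate the power spectral density of one IEEG channel around
# seizure onset with Welch's method and quantify the share of power in a
# fast-activity band. Signal, sampling rate and band edges are assumptions.
import numpy as np
from scipy.signal import welch

fs = 512  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# toy channel: a background rhythm plus low-voltage fast activity (~80 Hz) and noise
ieeg = np.sin(2 * np.pi * 9 * t) + 0.3 * np.sin(2 * np.pi * 80 * t) + 0.2 * np.random.randn(t.size)

freqs, psd = welch(ieeg, fs=fs, nperseg=2 * fs)
fast_band = (freqs >= 60) & (freqs <= 100)
fast_ratio = psd[fast_band].sum() / psd.sum()
print(f"fraction of spectral power in 60-100 Hz band: {fast_ratio:.2f}")
```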
Abstract:
BACKGROUND & Aims: Standardized instruments are needed to assess the activity of eosinophilic esophagitis (EoE), to provide endpoints for clinical trials and observational studies. We aimed to develop and validate a patient-reported outcome (PRO) instrument and score, based on items that could account for variations in patients' assessments of disease severity. We also evaluated relationships between patients' assessment of disease severity and EoE-associated endoscopic, histologic, and laboratory findings. METHODS We collected information from 186 patients with EoE in Switzerland and the US (69.4% male; median age, 43 years) via surveys (n = 135), focus groups (n = 27), and semi-structured interviews (n = 24). Items were generated for the instruments to assess biologic activity based on physician input. Linear regression was used to quantify the extent to which variations in patient-reported disease characteristics could account for variations in patients' assessment of EoE severity. The PRO instrument was prospectively used in 153 adult patients with EoE (72.5% male; median age, 38 years), and validated in an independent group of 120 patients with EoE (60.8% male; median age, 40.5 years). RESULTS Seven PRO factors that are used to assess characteristics of dysphagia, behavioral adaptations to living with dysphagia, and pain while swallowing accounted for 67% of the variation in patients' assessment of disease severity. Based on statistical consideration and patient input, a 7-day recall period was selected. Highly active EoE, based on endoscopic and histologic findings, was associated with an increase in patient-assessed disease severity. In the validation study, the mean difference between patient assessment of EoE severity and PRO score was 0.13 (on a scale from 0 to 10). CONCLUSIONS We developed and validated an EoE scoring system based on 7 PRO items that assesses symptoms over a 7-day recall period. Clinicaltrials.gov number: NCT00939263.
Abstract:
BACKGROUND Recently, two simple clinical scores were published to predict survival in trauma patients. Both scores may successfully guide major trauma triage, but neither has been independently validated in a hospital setting. METHODS This is a cohort study with 30-day mortality as the primary outcome to validate two new trauma scores-Mechanism, Glasgow Coma Scale (GCS), Age, and Pressure (MGAP) score and GCS, Age and Pressure (GAP) score-using data from the UK Trauma Audit and Research Network. First, an assessment of discrimination, using the area under the receiver operating characteristic (ROC) curve, and calibration, comparing mortality rates with those originally published, were performed. Second, we calculated sensitivity, specificity, predictive values, and likelihood ratios for prognostic score performance. Third, we propose new cutoffs for the risk categories. RESULTS A total of 79,807 adult (≥16 years) major trauma patients (2000-2010) were included; 5,474 (6.9%) died. Mean (SD) age was 51.5 (22.4) years, median GCS score was 15 (interquartile range, 15-15), and median Injury Severity Score (ISS) was 9 (interquartile range, 9-16). More than 50% of the patients had a low-risk GAP or MGAP score (1% mortality). With regard to discrimination, areas under the ROC curve were 87.2% for GAP score (95% confidence interval, 86.7-87.7) and 86.8% for MGAP score (95% confidence interval, 86.2-87.3). With regard to calibration, 2,390 (3.3%), 1,900 (28.5%), and 1,184 (72.2%) patients died in the low, medium, and high GAP risk categories, respectively. In the low- and medium-risk groups, these were almost double the previously published rates. For MGAP, 1,861 (2.8%), 1,455 (15.2%), and 2,158 (58.6%) patients died in the low-, medium-, and high-risk categories, consonant with results originally published. Reclassifying score point cutoffs improved likelihood ratios, sensitivity and specificity, as well as areas under the ROC curve. CONCLUSION We found both scores to be valid triage tools to stratify emergency department patients, according to their risk of death. MGAP calibrated better, but GAP slightly improved discrimination. The newly proposed cutoffs better differentiate risk classification and may therefore facilitate hospital resource allocation. LEVEL OF EVIDENCE Prognostic study, level II.
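The prognostic-performance step above reports sensitivity, specificity and likelihood ratios for the score-based risk classification. A minimal sketch of those metrics for a binary "high-risk" classification against 30-day mortality; the counts are placeholders, not TARN data.

```python
# Sketch: sensitivity, specificity and likelihood ratios from a 2x2 table.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])   # 1 = died within 30 days (toy labels)
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])   # 1 = classified high risk (toy labels)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
lr_pos = sensitivity / (1 - specificity)
lr_neg = (1 - sensitivity) / specificity
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```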
Abstract:
OBJECTIVES This study aimed to update the Logistic Clinical SYNTAX score to predict 3-year survival after percutaneous coronary intervention (PCI) and compare the performance with the SYNTAX score alone. BACKGROUND The SYNTAX score is a well-established angiographic tool to predict long-term outcomes after PCI. The Logistic Clinical SYNTAX score, developed by combining clinical variables with the anatomic SYNTAX score, has been shown to perform better than the SYNTAX score alone in predicting 1-year outcomes after PCI. However, the ability of this score to predict long-term survival is unknown. METHODS Patient-level data (N = 6,304, 399 deaths within 3 years) from 7 contemporary PCI trials were analyzed. We revised the overall risk and the predictor effects in the core model (SYNTAX score, age, creatinine clearance, and left ventricular ejection fraction) using Cox regression analysis to predict mortality at 3 years. We also updated the extended model by combining the core model with additional independent predictors of 3-year mortality (i.e., diabetes mellitus, peripheral vascular disease, and body mass index). RESULTS The revised Logistic Clinical SYNTAX models showed better discriminative ability than the anatomic SYNTAX score for the prediction of 3-year mortality after PCI (c-index: SYNTAX score, 0.61; core model, 0.71; and extended model, 0.73 in a cross-validation procedure). The extended model in particular performed better in differentiating low- and intermediate-risk groups. CONCLUSIONS Risk scores combining clinical characteristics with the anatomic SYNTAX score substantially better predict 3-year mortality than the SYNTAX score alone and should be used for long-term risk stratification of patients undergoing PCI.
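The modelling step above is a Cox regression combining the anatomic score with clinical variables and reporting a c-index. A minimal sketch using lifelines is shown below; the column names and data are hypothetical, and lifelines is one common choice rather than the authors' stated toolchain.

```python
# Sketch: fit a Cox proportional hazards model and report its concordance index.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "syntax_score": rng.gamma(2.0, 8.0, n),                         # toy anatomic scores
    "age": rng.normal(65, 10, n),
    "creatinine_clearance": rng.normal(80, 20, n),
    "lvef": rng.normal(55, 10, n),
    "time_to_event_years": rng.exponential(5.0, n).clip(0.1, 3.0),  # follow-up capped at 3 years
    "died": rng.binomial(1, 0.1, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event_years", event_col="died")
print(f"c-index: {cph.concordance_index_:.2f}")
```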
Abstract:
BACKGROUND Retinal optical coherence tomography (OCT) permits quantification of retinal layer atrophy relevant to assessment of neurodegeneration in multiple sclerosis (MS). Measurement artefacts may limit the use of OCT in MS research. OBJECTIVE An expert task force convened with the aim of providing guidance on the use of validated quality control (QC) criteria for OCT in MS research and clinical trials. METHODS A prospective multi-centre (n = 13) study. Peripapillary ring scan QC rating of an OCT training set (n = 50) was followed by a test set (n = 50). Inter-rater agreement was calculated using kappa statistics. Results were discussed at a round table after the assessment had taken place. RESULTS The inter-rater QC agreement was substantial (kappa = 0.7). Disagreement was highest for judging signal strength (kappa = 0.40). Future steps to resolve these issues were discussed. CONCLUSION Substantial agreement for QC assessment was achieved with the aid of the OSCAR-IB criteria. The task force has developed a website for free online training and QC certification. The criteria may prove useful for future research and trials in MS using OCT as a secondary outcome measure in a multi-centre setting.
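The agreement statistic reported above is kappa. A minimal sketch of Cohen's kappa for two raters judging scan QC as pass/fail follows; the ratings are toy values, and with more than two raters Fleiss' kappa would be the analogous choice.

```python
# Sketch: inter-rater agreement between two QC raters with Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "fail", "pass", "pass", "pass"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")
```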
Abstract:
Introduction To meet the quality standards for high-stakes OSCEs, it is necessary to ensure high-quality, standardized performance of the SPs involved.[1] One of the ways this can be assured is through assessment of the quality of SPs' performance in training and during the assessment. There is some literature concerning validated instruments that have been used to assess SP performance in formative contexts, but very little related to high-stakes contexts.[2], [3], [4] Content and structure During this workshop, different approaches to quality control of SPs' performance, developed in medicine, pharmacy and nursing OSCEs, will be introduced. Participants will have the opportunity to use these approaches in simulated interactions. Advantages and disadvantages of these approaches will be discussed. Anticipated outcomes By the end of this session, participants will be able to discuss the rationale for quality control of SPs' performance in high-stakes OSCEs, outline key factors in creating strategies for quality control, identify various strategies for assuring quality control, and reflect on applications to their own practice. Who should attend The workshop is designed for those interested in quality assurance of SP performance in high-stakes OSCEs. Level All levels are welcome. References Adamo G. 2003. Simulated and standardized patients in OSCEs: achievements and challenges: 1992-2003. Med Teach. 25(3), 262-270. Wind LA, Van Dalen J, Muijtjens AM, Rethans JJ. Assessing simulated patients in an educational setting: the MaSP (Maastricht Assessment of Simulated Patients). Med Educ 2004, 38(1):39-44. Bouter S, van Weel-Baumgarten E, Bolhuis S. Construction and validation of the Nijmegen Evaluation of the Simulated Patient (NESP): Assessing Simulated Patients' ability to role-play and provide feedback to students. Acad Med: Journal of the Association of American Medical Colleges 2012. May W, Fisher D, Souder D. Development of an instrument to measure the quality of standardized/simulated patient verbal feedback. Med Educ 2012, 2(1).
Abstract:
BACKGROUND AND AIMS Inflammatory bowel disease (IBD) frequently manifests during childhood and adolescence. To provide a comprehensive picture of a patient's health status, health-related quality of life (HRQoL) instruments are an essential complement to clinical symptoms and functional limitations. Currently, the IMPACT-III questionnaire is one of the most frequently used disease-specific HRQoL instruments among patients with IBD. However, there is a lack of studies examining the validity and reliability of this instrument. METHODS 146 paediatric IBD patients from the multicenter Swiss IBD paediatric cohort study database were included in the study. Medical and laboratory data were extracted from the hospital records. HRQoL data were assessed by means of standardized questionnaires filled out by the patients in a face-to-face interview. RESULTS The original six IMPACT-III domain scales could not be replicated in the current sample. A principal component analysis with the extraction of four factor scores revealed the most robust solution. The four factors indicated good internal reliability (Cronbach's alpha=.64-.86), good concurrent validity measured by correlations with the generic KIDSCREEN-27 scales, and excellent discriminant validity for the dimension of physical functioning measured by HRQoL differences between active and inactive severity groups (p<.001, d=1.04). CONCLUSIONS This study with Swiss children with IBD indicates good validity and reliability for the IMPACT-III questionnaire. However, our findings suggest a slightly different factor structure than originally proposed. The IMPACT-III questionnaire can be recommended for use in clinical practice. The factor structure should be further examined in other samples.
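The internal-reliability figures above are Cronbach's alpha values per factor scale. A minimal sketch of that computation from its standard formula is shown below; the item matrix is simulated, not IMPACT-III data.

```python
# Sketch: Cronbach's alpha for the items of one questionnaire scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: array of shape (n_respondents, n_items) for one scale."""
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

rng = np.random.default_rng(3)
latent = rng.normal(size=(146, 1))
scale_items = latent + rng.normal(scale=0.8, size=(146, 6))   # 6 correlated toy items
print(f"Cronbach's alpha: {cronbach_alpha(scale_items):.2f}")
```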
Abstract:
BACKGROUND CONTEXT The nerve root sedimentation sign in transverse magnetic resonance imaging has been shown to discriminate well between selected patients with and without lumbar spinal stenosis (LSS), but the performance of this new test, when used in a broader patient population, is not yet known. PURPOSE To evaluate the clinical performance of the nerve root sedimentation sign in detecting central LSS above L5 and to determine its potential significance for treatment decisions. STUDY DESIGN Retrospective cohort study. PATIENT SAMPLE One hundred eighteen consecutive patients with suspected LSS (52% women, median age 62 years) with a median follow-up of 24 months. OUTCOME MEASURES Oswestry disability index (ODI) and back and leg pain relief. METHODS We performed a clinical test validation study to assess the clinical performance of the sign by measuring its association with health outcomes. Subjects were patients referred to our orthopedic spine unit from 2004 to 2007 before the sign had been described. Based on clinical and radiological diagnostics, patients had been treated with decompression surgery or nonsurgical treatment. Changes in the ODI and pain from baseline to 24-month follow-up were compared between sedimentation sign positives and negatives in both treatment groups. RESULTS Sixty-nine patients underwent surgery. Average baseline ODI in the surgical group was 54.7%, and the sign was positive in 39 patients (mean ODI improvement 29.0 points) and negative in 30 (ODI improvement 28.4), with no statistically significant difference in ODI and pain improvement between groups. In the 49 patients of the nonsurgical group, mean baseline ODI was 42.4%; the sign was positive in 18 (ODI improvement 0.6) and negative in 31 (ODI improvement 17.7). A positive sign was associated with a smaller ODI and back pain improvement than negative signs (both p<.01 on t test). CONCLUSIONS In patients commonly treated with decompression surgery, the sedimentation sign does not appear to predict surgical outcome. In nonsurgically treated patients, a positive sign is associated with more limited improvement. In these cases, surgery might be effective, but this needs investigation in prospective randomized trials (Australian New Zealand Clinical Trial Registry, number ACTRN12610000567022).
Abstract:
OBJECTIVE Reliable tools to predict long-term outcome among patients with well compensated advanced liver disease due to chronic HCV infection are lacking. DESIGN Risk scores for mortality and for cirrhosis-related complications were constructed with Cox regression analysis in a derivation cohort and evaluated in a validation cohort, both including patients with chronic HCV infection and advanced fibrosis. RESULTS In the derivation cohort, 100/405 patients died during a median 8.1 (IQR 5.7-11.1) years of follow-up. Multivariate Cox analyses showed age (HR=1.06, 95% CI 1.04 to 1.09, p<0.001), male sex (HR=1.91, 95% CI 1.10 to 3.29, p=0.021), platelet count (HR=0.91, 95% CI 0.87 to 0.95, p<0.001) and log10 aspartate aminotransferase/alanine aminotransferase ratio (HR=1.30, 95% CI 1.12 to 1.51, p=0.001) were independently associated with mortality (C statistic=0.78, 95% CI 0.72 to 0.83). In the validation cohort, 58/296 patients with cirrhosis died during a median of 6.6 (IQR 4.4-9.0) years. Among patients with estimated 5-year mortality risks <5%, 5-10% and >10%, the observed 5-year mortality rates in the derivation cohort and validation cohort were 0.9% (95% CI 0.0 to 2.7) and 2.6% (95% CI 0.0 to 6.1), 8.1% (95% CI 1.8 to 14.4) and 8.0% (95% CI 1.3 to 14.7), 21.8% (95% CI 13.2 to 30.4) and 20.9% (95% CI 13.6 to 28.1), respectively (C statistic in validation cohort = 0.76, 95% CI 0.69 to 0.83). The risk score for cirrhosis-related complications also incorporated HCV genotype (C statistic = 0.80, 95% CI 0.76 to 0.83 in the derivation cohort; and 0.74, 95% CI 0.68 to 0.79 in the validation cohort). CONCLUSIONS Prognosis of patients with chronic HCV infection and compensated advanced liver disease can be accurately assessed with risk scores including readily available objective clinical parameters.
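The validation above compares observed 5-year mortality within predicted-risk bands (<5%, 5-10%, >10%), i.e. a calibration check. A minimal sketch of that grouping step follows; the predicted risks and outcomes are simulated, not the cohort data.

```python
# Sketch: observed mortality by predicted 5-year risk band (calibration table).
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 400
predicted_risk = rng.beta(1.5, 10, n)          # hypothetical predicted 5-year mortality risks
died_5y = rng.binomial(1, predicted_risk)      # toy observed 5-year outcomes

bands = pd.cut(predicted_risk, bins=[0, 0.05, 0.10, 1.0], labels=["<5%", "5-10%", ">10%"])
calibration = (pd.DataFrame({"band": bands, "died_5y": died_5y})
               .groupby("band", observed=True)["died_5y"]
               .agg(["count", "mean"])
               .rename(columns={"mean": "observed_5y_mortality"}))
print(calibration)
```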
Abstract:
Background Complete-pelvis segmentation in antero-posterior pelvic radiographs is required to create a patient-specific three-dimensional pelvis model for surgical planning and postoperative assessment in image-free navigation of total hip arthroplasty. Methods A fast and robust framework for accurately segmenting the complete pelvis is presented, consisting of two consecutive modules. In the first module, a three-stage method was developed to delineate the left hemi-pelvis based on statistical appearance and shape models. To handle complex pelvic structures, anatomy-specific information processing techniques were employed. As the input to the second module, the delineated left hemi-pelvis was then reflected about an estimated symmetry line of the radiograph to initialize the right hemi-pelvis segmentation. The right hemi-pelvis was segmented by the same three-stage method. Results Two experiments, conducted on 143 and 40 AP radiographs respectively, demonstrated a mean segmentation accuracy of 1.61±0.68 mm. A clinical study to investigate the postoperative assessment of acetabular cup orientations based on the proposed framework revealed an average accuracy of 1.2°±0.9° and 1.6°±1.4° for anteversion and inclination, respectively. Delineation of each radiograph takes less than one minute. Conclusions Although further validation is needed, the preliminary results suggest the clinical applicability of the proposed framework to image-free THA.
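The initialization step described above mirrors the delineated left hemi-pelvis about an estimated symmetry line to seed the right-side segmentation. A minimal geometric sketch of that reflection follows; the contour coordinates and symmetry-line position are illustrative, and the actual framework relies on statistical appearance and shape models.

```python
# Sketch: reflect a delineated contour about an estimated vertical symmetry line.
import numpy as np

def reflect_about_vertical_line(points: np.ndarray, x_axis: float) -> np.ndarray:
    """points: (n, 2) array of (x, y) pixel coordinates; x_axis: symmetry line position."""
    mirrored = points.copy()
    mirrored[:, 0] = 2.0 * x_axis - mirrored[:, 0]
    return mirrored

left_contour = np.array([[120.0, 300.0], [150.0, 320.0], [180.0, 360.0]])  # toy landmarks
symmetry_x = 256.0   # estimated symmetry line (e.g., near half the image width)
right_init = reflect_about_vertical_line(left_contour, symmetry_x)
print(right_init)
```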
Abstract:
BACKGROUND The Pulmonary Embolism Quality of Life questionnaire (PEmb-QoL) is a 40-item questionnaire to measure health-related quality of life in patients with pulmonary embolism. It covers six dimensions: frequency of complaints, limitations in activities of daily living, work-related problems, social limitations, intensity of complaints, and emotional complaints. The PEmb-QoL was originally developed in Dutch and English; we prospectively validated a German version. METHODS A forward-backward translation of the English version of the PEmb-QoL into German was performed. German-speaking consecutive adult patients aged ≥18 years with an acute, objectively confirmed pulmonary embolism discharged from a Swiss university hospital (01/2011-06/2013) were recruited by telephone. Established psychometric tests and criteria were used to evaluate the acceptability, reliability, and validity of the German PEmb-QoL questionnaire. To assess the underlying dimensions, an exploratory factor analysis was performed. RESULTS Overall, 102 patients were enrolled in the study. The German version of the PEmb-QoL showed good internal consistency (Cronbach's alpha ranging from 0.72 to 0.96), item-total (0.53-0.95) and inter-item correlations (>0.4), and test-retest reliability (intra-class correlation coefficients 0.59-0.89) for the dimension scores. A moderate correlation of the PEmb-QoL with SF-36 dimension and summary scores (0.21-0.83) indicated convergent validity, while low correlations of PEmb-QoL dimensions with clinical characteristics (-0.16 to 0.37) supported discriminant validity. The exploratory factor analysis suggested four underlying dimensions: limitations in daily activities, symptoms, work-related problems, and emotional complaints. CONCLUSION The German version of the PEmb-QoL questionnaire is a valid and reliable disease-specific measure of quality of life in patients with pulmonary embolism.
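The dimensionality step above is an exploratory factor analysis suggesting four underlying dimensions. A minimal sketch of fitting a four-factor model to item responses follows; the item matrix is simulated rather than PEmb-QoL data, and scikit-learn's FactorAnalysis is one of several possible implementations (packages such as factor_analyzer add rotation options).

```python
# Sketch: fit a 4-factor model to questionnaire items and inspect the loadings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n_patients, n_items = 102, 40
latent = rng.normal(size=(n_patients, 4))                     # 4 toy underlying dimensions
loadings = rng.normal(scale=0.7, size=(4, n_items))
items = latent @ loadings + rng.normal(scale=0.5, size=(n_patients, n_items))

fa = FactorAnalysis(n_components=4, random_state=0).fit(items)
print(fa.components_.shape)   # (4, 40): loading of each item on each factor
```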
Abstract:
BACKGROUND Prostate cancer (PCa) is a very heterogeneous disease with respect to clinical outcome. This study explored differential DNA methylation in a priori selected genes to diagnose PCa and predict clinical failure (CF) in high-risk patients. METHODS A quantitative multiplex, methylation-specific PCR assay was developed to assess promoter methylation of the APC, CCND2, GSTP1, PTGS2 and RARB genes in formalin-fixed, paraffin-embedded tissue samples from 42 patients with benign prostatic hyperplasia and radical prostatectomy specimens of patients with high-risk PCa, encompassing training and validation cohorts of 147 and 71 patients, respectively. Log-rank tests, univariate and multivariate Cox models were used to investigate the prognostic value of the DNA methylation. RESULTS Hypermethylation of APC, CCND2, GSTP1, PTGS2 and RARB was highly cancer-specific. However, only GSTP1 methylation was significantly associated with CF in both independent high-risk PCa cohorts. Importantly, trichotomization into low, moderate and high GSTP1 methylation level subgroups was highly predictive for CF. Patients with either a low or high GSTP1 methylation level, as compared to the moderate methylation groups, were at a higher risk for CF in both the training (Hazard ratio [HR], 3.65; 95% CI, 1.65 to 8.07) and validation sets (HR, 4.27; 95% CI, 1.03 to 17.72) as well as in the combined cohort (HR, 2.74; 95% CI, 1.42 to 5.27) in multivariate analysis. CONCLUSIONS Classification of primary high-risk tumors into three subtypes based on DNA methylation can be combined with clinico-pathological parameters for a more informative risk-stratification of these PCa patients.
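The prognostic step above trichotomizes patients into low, moderate and high GSTP1 methylation groups before survival modelling. A minimal sketch of that grouping with a log-rank comparison of clinical-failure curves follows; the data are simulated, and the actual analysis used Cox models as stated above.

```python
# Sketch: split patients into methylation tertiles and compare survival across groups.
import numpy as np
import pandas as pd
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(6)
n = 147
df = pd.DataFrame({
    "gstp1_methylation": rng.uniform(0, 100, n),       # toy methylation levels
    "time_to_cf_years": rng.exponential(6.0, n),       # toy time to clinical failure
    "clinical_failure": rng.binomial(1, 0.35, n),
})
df["methylation_group"] = pd.qcut(df["gstp1_methylation"], 3,
                                  labels=["low", "moderate", "high"])

result = multivariate_logrank_test(df["time_to_cf_years"],
                                   df["methylation_group"],
                                   df["clinical_failure"])
print(f"log-rank p-value: {result.p_value:.3f}")
```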
Abstract:
Background: Access to hepatitis B viral load (VL) testing is poor in sub-Saharan Africa (SSA) due to economic and logistical reasons. Objectives: To demonstrate the feasibility of testing dried blood spots (DBS) for hepatitis B virus (HBV) VL in a laboratory in Lusaka, Zambia, and to compare HBV VLs between DBS and plasma samples. Study design: Paired plasma and DBS samples from HIV-HBV co-infected Zambian adults were analyzed for HBV VL using the COBAS AmpliPrep/COBAS TaqMan HBV test (Version 2.0) and for HBV genotype by direct sequencing. We used Bland-Altman analysis to compare VLs between sample types and by genotype. Logistic regression analysis was conducted to assess the probability of an undetectable DBS result by plasma VL. Results: Among 68 participants, median age was 34 years, 61.8% were men, and median plasma HBV VL was 3.98 log IU/ml (interquartile range, 2.04–5.95). Among sequenced viruses, 28 were genotype A1 and 27 were genotype E. Bland-Altman plots suggested strong agreement between DBS and plasma VLs. DBS VLs were on average 1.59 log IU/ml lower than plasma, with 95% limits of agreement of −2.40 to −0.83 log IU/ml. At a plasma VL ≥2,000 IU/ml, the probability of an undetectable DBS result was 1.8% (95% CI: 0.5–6.6). At plasma VL ≥20,000 IU/ml this probability reduced to 0.2% (95% CI: 0.03–1.7).
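The agreement analysis above is a Bland-Altman comparison of paired log viral loads, summarized by a mean difference (bias) and 95% limits of agreement. A minimal sketch of that computation follows; the paired values are placeholders, not study data.

```python
# Sketch: Bland-Altman bias and 95% limits of agreement for paired measurements.
import numpy as np

plasma_log_vl = np.array([4.0, 5.2, 3.1, 6.0, 2.5, 4.8, 5.5, 3.9])                 # log IU/ml (toy)
dbs_log_vl = plasma_log_vl - np.array([1.7, 1.5, 1.4, 1.8, 1.6, 1.5, 1.7, 1.6])    # toy paired DBS values

diff = dbs_log_vl - plasma_log_vl
bias = diff.mean()
loa_low, loa_high = bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1)
print(f"bias: {bias:.2f} log IU/ml; 95% limits of agreement: {loa_low:.2f} to {loa_high:.2f}")
```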