925 results for Clinical validation
Abstract:
Background The identification of additional prognostic markers to improve risk stratification and to avoid overtreatment is one of the most urgent clinical needs in prostate cancer (PCa). MicroRNAs, being important regulators of gene expression, are promising biomarkers in various cancer entities, though their impact as prognostic predictors in PCa is poorly understood. The aim of this study was to identify specific miRNAs as potential prognostic markers in high-risk PCa and to validate their clinical impact. Methodology and Principal Findings We performed miRNA-microarray analysis in a high-risk PCa study group selected by clinical outcome (clinical progression-free survival (CPFS) vs. clinical failure (CF)). We identified seven candidate miRNAs (let-7a/b/c, miR-515-3p/5p, -181b, -146b, and -361) that showed differential expression between the two groups. Further qRT-PCR analysis revealed down-regulation of members of the let-7 family in the majority of a large, well-characterized high-risk PCa cohort (n = 98). Expression of let-7a/b/c was correlated with clinical outcome parameters of this group. While let-7a showed no association with clinically relevant data, let-7b and let-7c were associated with CF in PCa patients and partially functioned as independent prognostic markers. Validation of the data using an independent high-risk study cohort revealed that let-7b, but not let-7c, serves as an independent prognostic marker for biochemical recurrence (BCR) and CF. Furthermore, we identified HMGA1, a non-histone protein, as a new target of let-7b and found that let-7b down-regulation correlated with HMGA1 over-expression in primary PCa samples. Conclusion Our findings define a distinct miRNA expression profile in PCa cases with early CF and identify let-7b as a prognostic biomarker in high-risk PCa. This study highlights the importance of let-7b as a tumor suppressor miRNA in high-risk PCa and presents a basis for improving individual therapy for high-risk PCa patients.
Abstract:
IMPORTANCE Because effective interventions to reduce hospital readmissions are often expensive to implement, a score to predict potentially avoidable readmissions may help target the patients most likely to benefit. OBJECTIVE To derive and internally validate a prediction model for potentially avoidable 30-day hospital readmissions in medical patients using administrative and clinical data readily available prior to discharge. DESIGN Retrospective cohort study. SETTING Academic medical center in Boston, Massachusetts. PARTICIPANTS All patient discharges from any medical services between July 1, 2009, and June 30, 2010. MAIN OUTCOME MEASURES Potentially avoidable 30-day readmissions to 3 hospitals of the Partners HealthCare network were identified using a validated computerized algorithm based on administrative data (SQLape). A simple score was developed using multivariable logistic regression, with two-thirds of the sample randomly selected as the derivation cohort and one-third as the validation cohort. RESULTS Among 10,731 eligible discharges, 2398 discharges (22.3%) were followed by a 30-day readmission, of which 879 (8.5% of all discharges) were identified as potentially avoidable. The prediction score identified 7 independent factors, referred to as the HOSPITAL score: Hemoglobin at discharge, discharge from an Oncology service, Sodium level at discharge, Procedure during the index admission, Index Type of admission, number of Admissions during the last 12 months, and Length of stay. In the validation set, 26.7% of the patients were classified as high risk, with an estimated potentially avoidable readmission risk of 18.0% (observed, 18.2%). The HOSPITAL score had fair discriminatory power (C statistic, 0.71) and had good calibration. CONCLUSIONS AND RELEVANCE This simple prediction model identifies before discharge the risk of potentially avoidable 30-day readmission in medical patients.
This score has potential to easily identify patients who may need more intensive transitional care interventions.
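The HOSPITAL score is a simple additive point score, so its use at the bedside can be sketched in a few lines. The seven predictor names below come from the abstract; the point weights, thresholds, and risk-band cutoffs are hypothetical placeholders for illustration, not the published values.

```python
# Illustrative sketch of an additive readmission-risk score in the spirit of
# the HOSPITAL score. Predictor names follow the abstract; all weights,
# thresholds, and cutoffs are HYPOTHETICAL placeholders.

def hospital_style_score(patient):
    """Sum integer points over the seven predictors (hypothetical weights)."""
    points = 0
    if patient["hemoglobin_g_dl"] < 12:        # Hemoglobin at discharge
        points += 1
    if patient["oncology_service"]:            # discharge from an Oncology service
        points += 2
    if patient["sodium_mmol_l"] < 135:         # Sodium level at discharge
        points += 1
    if patient["procedure_during_stay"]:       # Procedure during index admission
        points += 1
    if patient["nonelective_admission"]:       # Index Type of admission
        points += 1
    if patient["admissions_last_12m"] >= 2:    # Admissions during the last 12 months
        points += 2
    if patient["length_of_stay_days"] >= 5:    # Length of stay
        points += 2
    return points

def risk_category(points):
    """Map the total to a triage band (hypothetical cutoffs)."""
    if points <= 4:
        return "low"
    if points <= 6:
        return "intermediate"
    return "high"

example = {
    "hemoglobin_g_dl": 11.2, "oncology_service": False, "sodium_mmol_l": 132,
    "procedure_during_stay": True, "nonelective_admission": True,
    "admissions_last_12m": 3, "length_of_stay_days": 7,
}
print(hospital_style_score(example), risk_category(hospital_style_score(example)))
```

A score built this way mirrors the derivation/validation design in the abstract: weights are fit by logistic regression on two-thirds of the sample, then the frozen point system is checked for discrimination and calibration on the held-out third.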
Abstract:
Chemotherapeutic drugs kill cancer cells, but it is unclear why this happens in responding patients but not in non-responders. Proteomic profiles of patients with oesophageal adenocarcinoma may be helpful in predicting response and selecting more effective treatment strategies. In this study, pretherapeutic oesophageal adenocarcinoma biopsies were analysed for proteomic changes associated with response to chemotherapy by MALDI imaging mass spectrometry. Resulting candidate proteins were identified by liquid chromatography-tandem mass spectrometry (LC-MS/MS) and investigated for functional relevance in vitro. Clinical impact was validated in pretherapeutic biopsies from an independent patient cohort. Studies on the incidence of these defects in other solid tumours were included. We discovered that clinical response to cisplatin correlated with pre-existing defects in the mitochondrial respiratory chain complexes of cancer cells, caused by loss of specific cytochrome c oxidase (COX) subunits. Knockdown of a COX protein altered chemosensitivity in vitro, increasing the propensity of cancer cells to undergo cell death following cisplatin treatment. In an independent validation, patients with reduced COX protein expression prior to treatment exhibited favourable clinical outcomes to chemotherapy, whereas tumours with unchanged COX expression were chemoresistant. In conclusion, previously undiscovered pre-existing defects in mitochondrial respiratory complexes cause cancer cells to become chemosensitive: mitochondrial defects lower the cells' threshold for undergoing cell death in response to cisplatin. By contrast, cancer cells with intact mitochondrial respiratory complexes are chemoresistant and have a high threshold for cisplatin-induced cell death. This connection between mitochondrial respiration and chemosensitivity is relevant to anticancer therapeutics that target the mitochondrial electron transport chain.
Abstract:
BACKGROUND Programmed cell death 1 (PD-1) receptor triggering by PD ligand 1 (PD-L1) inhibits T cell activation. PD-L1 expression has been detected in different malignancies and associated with poor prognosis. Therapeutic antibodies inhibiting the PD-1/PD-L1 interaction have been developed. MATERIALS AND METHODS A tissue microarray (n=1491) including healthy colon mucosa and clinically annotated colorectal cancer (CRC) specimens was stained with two PD-L1 specific antibody preparations. Surgically excised CRC specimens were enzymatically digested and analysed for cluster of differentiation 8 (CD8) and PD-1 expression. RESULTS Strong PD-L1 expression was observed in 37% of mismatch repair (MMR)-proficient and in 29% of MMR-deficient CRC. In MMR-proficient CRC, strong PD-L1 expression correlated with infiltration by CD8(+) lymphocytes (P=0.0001), which did not express PD-1. In univariate analysis, strong PD-L1 expression in MMR-proficient CRC was significantly associated with early T stage, absence of lymph node metastases, lower tumour grade, absence of vascular invasion and significantly improved survival in training (P=0.0001) and validation (P=0.03) sets. A similar trend (P=0.052) was also detectable in multivariate analysis including age, sex, T stage, N stage, tumour grade, vascular invasion, invasive margin and MMR status. Interestingly, PD-L1 and interferon (IFN)-γ gene expression levels, as detected by quantitative reverse transcriptase polymerase chain reaction (RT-PCR) in fresh frozen CRC specimens (n=42), were found to be significantly associated (r=0.33, P=0.03). CONCLUSION PD-L1 expression is paradoxically associated with improved survival in MMR-proficient CRC.
Abstract:
Objective: Impaired cognition is an important dimension in psychosis and its at-risk states. Research on the value of impaired cognition for psychosis prediction in at-risk samples, however, mainly relies on study-specific sample means of neurocognitive tests, which unlike widely available general test norms are difficult to translate into clinical practice. The aim of this study was to explore the combined predictive value of at-risk criteria and neurocognitive deficits according to test norms with a risk stratification approach. Method: Potential predictors of psychosis (neurocognitive deficits and at-risk criteria) over 24 months were investigated in 97 at-risk patients. Results: The final prediction model included (1) at-risk criteria (attenuated psychotic symptoms plus subjective cognitive disturbances) and (2) a processing speed deficit (digit symbol test). The model was stratified into 4 risk classes with hazard rates between 0.0 (both predictors absent) and 1.29 (both predictors present). Conclusions: The combination of a processing speed deficit and at-risk criteria provides an optimized stratified risk assessment. Based on neurocognitive test norms, the validity of our proposed 3 risk classes could easily be examined in independent at-risk samples and, pending positive validation results, our approach could easily be applied in clinical practice in the future.
Abstract:
OBJECTIVE To validate use of stress MRI for evaluation of stifle joints of dogs with an intact or deficient cranial cruciate ligament (CrCL). SAMPLE 10 cadaveric stifle joints from 10 dogs. PROCEDURES A custom-made limb-holding device and a pulley system linked to a paw plate were used to apply axial compression across the stifle joint and induce cranial tibial translation with the joint in various degrees of flexion. By use of sagittal proton density-weighted MRI, CrCL-intact and deficient stifle joints were evaluated under conditions of loading stress simulating the tibial compression test or the cranial drawer test. Medial and lateral femorotibial subluxation following CrCL transection measured under a simulated tibial compression test and a cranial drawer test were compared. RESULTS By use of tibial compression test MRI, the mean ± SD cranial tibial translations in the medial and lateral compartments were 9.6 ± 3.7 mm and 10 ± 4.1 mm, respectively. By use of cranial drawer test MRI, the mean ± SD cranial tibial translations in the medial and lateral compartments were 8.3 ± 3.3 mm and 9.5 ± 3.5 mm, respectively. No significant difference in femorotibial subluxation was found between stress MRI techniques. Femorotibial subluxation elicited by use of the cranial drawer test was greater in the lateral than in the medial compartment. CONCLUSIONS AND CLINICAL RELEVANCE Both stress techniques induced stifle joint subluxation following CrCL transection that was measurable by use of MRI, suggesting that both methods may be further evaluated for clinical use.
Abstract:
BACKGROUND AND PURPOSE The DRAGON score predicts functional outcome in the hyperacute phase of intravenous thrombolysis treatment of ischemic stroke patients. We aimed to validate the score in a large multicenter cohort in anterior and posterior circulation. METHODS Prospectively collected data of consecutive ischemic stroke patients who received intravenous thrombolysis in 12 stroke centers were merged (n=5471). We excluded patients lacking data necessary to calculate the score and patients with missing 3-month modified Rankin scale scores. The final cohort comprised 4519 eligible patients. We assessed the performance of the DRAGON score with area under the receiver operating characteristic curve in the whole cohort for both good (modified Rankin scale score, 0-2) and miserable (modified Rankin scale score, 5-6) outcomes. RESULTS Area under the receiver operating characteristic curve was 0.84 (0.82-0.85) for miserable outcome and 0.82 (0.80-0.83) for good outcome. Proportions of patients with good outcome were 96%, 93%, 78%, and 0% for 0 to 1, 2, 3, and 8 to 10 score points, respectively. Proportions of patients with miserable outcome were 0%, 2%, 4%, 89%, and 97% for 0 to 1, 2, 3, 8, and 9 to 10 points, respectively. When tested separately for anterior and posterior circulation, there was no difference in performance (P=0.55); areas under the receiver operating characteristic curve were 0.84 (0.83-0.86) and 0.82 (0.78-0.87), respectively. No sex-related difference in performance was observed (P=0.25). CONCLUSIONS The DRAGON score showed very good performance in the large merged cohort in both anterior and posterior circulation strokes. The DRAGON score provides rapid estimation of patient prognosis and supports clinical decision-making in the hyperacute phase of stroke care (eg, when invasive add-on strategies are considered).
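The C statistic (area under the ROC curve) reported for the DRAGON score is the probability that a randomly chosen patient with the outcome receives a higher predicted score than a randomly chosen patient without it. A minimal, dependency-free sketch of that pairwise definition (function and variable names are our own, not from the study):

```python
def c_statistic(scores, outcomes):
    """Concordance (C) statistic for binary outcomes: the fraction of
    case/non-case pairs in which the case has the higher predicted score.
    Tied scores count as half-concordant."""
    cases = [s for s, y in zip(scores, outcomes) if y == 1]
    controls = [s for s, y in zip(scores, outcomes) if y == 0]
    if not cases or not controls:
        raise ValueError("need at least one case and one control")
    concordant = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                concordant += 1.0
            elif c == k:
                concordant += 0.5
    return concordant / (len(cases) * len(controls))

# Toy example: two cases and two controls; 3 of 4 pairs are concordant.
print(c_statistic([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```

On this definition a C statistic of 0.84, as reported for miserable outcome, means an 84% chance that the score ranks a patient with that outcome above one without it; 0.5 would be chance-level discrimination.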
Abstract:
In patients diagnosed with pharmaco-resistant epilepsy, cerebral areas responsible for seizure generation can be defined by performing implantation of intracranial electrodes. The identification of the epileptogenic zone (EZ) is based on visual inspection of the intracranial electroencephalogram (IEEG) performed by highly qualified neurophysiologists. New computer-based quantitative EEG analyses have been developed in collaboration with the signal analysis community to expedite EZ detection. The aim of the present report is to compare different signal analysis approaches developed in four different European laboratories working in close collaboration with four European Epilepsy Centers. Computer-based signal analysis methods were retrospectively applied to IEEG recordings performed in four patients undergoing pre-surgical exploration of pharmaco-resistant epilepsy. The four methods developed by the different teams to identify the EZ are based either on frequency analysis, on nonlinear signal analysis, on connectivity measures or on statistical parametric mapping of epileptogenicity indices. All methods converge on the identification of the EZ in patients who present with fast activity at seizure onset. When traditional visual inspection was not successful in detecting the EZ on IEEG, the different signal analysis methods produced highly discordant results. Quantitative analysis of IEEG recordings complements clinical evaluation by contributing to the study of epileptogenic networks during seizures. We demonstrate that the sensitivity of different computer-based methods for detecting the EZ, relative to visual EEG inspection, depends on the specific seizure pattern.
Abstract:
BACKGROUND & AIMS Standardized instruments are needed to assess the activity of eosinophilic esophagitis (EoE), to provide endpoints for clinical trials and observational studies. We aimed to develop and validate a patient-reported outcome (PRO) instrument and score, based on items that could account for variations in patients' assessments of disease severity. We also evaluated relationships between patients' assessment of disease severity and EoE-associated endoscopic, histologic, and laboratory findings. METHODS We collected information from 186 patients with EoE in Switzerland and the US (69.4% male; median age, 43 years) via surveys (n = 135), focus groups (n = 27), and semi-structured interviews (n = 24). Items were generated for the instruments to assess biologic activity based on physician input. Linear regression was used to quantify the extent to which variations in patient-reported disease characteristics could account for variations in patients' assessment of EoE severity. The PRO instrument was prospectively used in 153 adult patients with EoE (72.5% male; median age, 38 years), and validated in an independent group of 120 patients with EoE (60.8% male; median age, 40.5 years). RESULTS Seven PRO factors that are used to assess characteristics of dysphagia, behavioral adaptations to living with dysphagia, and pain while swallowing accounted for 67% of the variation in patients' assessment of disease severity. Based on statistical consideration and patient input, a 7-day recall period was selected. Highly active EoE, based on endoscopic and histologic findings, was associated with an increase in patient-assessed disease severity. In the validation study, the mean difference between patient assessment of EoE severity and PRO score was 0.13 (on a scale from 0 to 10). CONCLUSIONS We developed and validated an EoE scoring system based on 7 PRO items that assesses symptoms over a 7-day recall period. ClinicalTrials.gov number: NCT00939263.
Abstract:
BACKGROUND Recently, two simple clinical scores were published to predict survival in trauma patients. Both scores may successfully guide major trauma triage, but neither has been independently validated in a hospital setting. METHODS This is a cohort study with 30-day mortality as the primary outcome to validate two new trauma scores, the Mechanism, Glasgow Coma Scale (GCS), Age, and Pressure (MGAP) score and the GCS, Age, and Pressure (GAP) score, using data from the UK Trauma Audit and Research Network. First, we assessed discrimination, using the area under the receiver operating characteristic (ROC) curve, and calibration, comparing mortality rates with those originally published. Second, we calculated sensitivity, specificity, predictive values, and likelihood ratios for prognostic score performance. Third, we propose new cutoffs for the risk categories. RESULTS A total of 79,807 adult (≥16 years) major trauma patients (2000-2010) were included; 5,474 (6.9%) died. Mean (SD) age was 51.5 (22.4) years, median GCS score was 15 (interquartile range, 15-15), and median Injury Severity Score (ISS) was 9 (interquartile range, 9-16). More than 50% of the patients had a low-risk GAP or MGAP score (1% mortality). With regard to discrimination, areas under the ROC curve were 87.2% for the GAP score (95% confidence interval, 86.7-87.7) and 86.8% for the MGAP score (95% confidence interval, 86.2-87.3). With regard to calibration, 2,390 (3.3%), 1,900 (28.5%), and 1,184 (72.2%) patients died in the low-, medium-, and high-risk GAP categories, respectively. In the low- and medium-risk groups, these rates were almost double those previously published. For MGAP, 1,861 (2.8%), 1,455 (15.2%), and 2,158 (58.6%) patients died in the low-, medium-, and high-risk categories, consonant with the results originally published. Reclassifying score point cutoffs improved likelihood ratios, sensitivity, and specificity, as well as areas under the ROC curve.
CONCLUSION We found both scores to be valid triage tools to stratify emergency department patients according to their risk of death. MGAP calibrated better, but GAP offered slightly better discrimination. The newly proposed cutoffs differentiate risk classes more clearly and may therefore facilitate hospital resource allocation. LEVEL OF EVIDENCE Prognostic study, level II.
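The second validation step above, sensitivity, specificity, predictive values, and likelihood ratios, is arithmetic on a 2x2 table of score classification against 30-day mortality. A self-contained sketch (function and key names are our own; the counts in the example are made up, not taken from the study):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Standard 2x2-table performance measures for a binary triage cutoff:
    tp/fp/fn/tn = true positives, false positives, false negatives, true negatives."""
    sens = tp / (tp + fn)                 # sensitivity (true positive rate)
    spec = tn / (tn + fp)                 # specificity (true negative rate)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),            # positive predictive value
        "npv": tn / (tn + fn),            # negative predictive value
        "lr_plus": sens / (1 - spec),     # positive likelihood ratio
        "lr_minus": (1 - sens) / spec,    # negative likelihood ratio
    }

# Hypothetical counts for one candidate cutoff.
stats = diagnostic_stats(tp=80, fp=10, fn=20, tn=90)
print(stats["sensitivity"], stats["specificity"], stats["lr_plus"])
```

Moving a score cutoff trades these quantities against each other, which is why the abstract reports that reclassifying the cutoffs changed likelihood ratios, sensitivity, and specificity together.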
Abstract:
OBJECTIVES This study aimed to update the Logistic Clinical SYNTAX score to predict 3-year survival after percutaneous coronary intervention (PCI) and compare its performance with the SYNTAX score alone. BACKGROUND The SYNTAX score is a well-established angiographic tool to predict long-term outcomes after PCI. The Logistic Clinical SYNTAX score, developed by combining clinical variables with the anatomic SYNTAX score, has been shown to perform better than the SYNTAX score alone in predicting 1-year outcomes after PCI. However, the ability of this score to predict long-term survival is unknown. METHODS Patient-level data (N = 6,304, 399 deaths within 3 years) from 7 contemporary PCI trials were analyzed. We revised the overall risk and the predictor effects in the core model (SYNTAX score, age, creatinine clearance, and left ventricular ejection fraction) using Cox regression analysis to predict mortality at 3 years. We also updated the extended model by combining the core model with additional independent predictors of 3-year mortality (i.e., diabetes mellitus, peripheral vascular disease, and body mass index). RESULTS The revised Logistic Clinical SYNTAX models showed better discriminative ability than the anatomic SYNTAX score for the prediction of 3-year mortality after PCI (c-index: SYNTAX score, 0.61; core model, 0.71; and extended model, 0.73 in a cross-validation procedure). The extended model in particular performed better in differentiating low- and intermediate-risk groups. CONCLUSIONS Risk scores combining clinical characteristics with the anatomic SYNTAX score predict 3-year mortality substantially better than the SYNTAX score alone and should be used for long-term risk stratification of patients undergoing PCI.
Abstract:
BACKGROUND Retinal optical coherence tomography (OCT) permits quantification of retinal layer atrophy relevant to the assessment of neurodegeneration in multiple sclerosis (MS). Measurement artefacts may limit the use of OCT in MS research. OBJECTIVE An expert task force convened with the aim of providing guidance on the use of validated quality control (QC) criteria for OCT in MS research and clinical trials. METHODS A prospective multi-centre (n = 13) study. Peripapillary ring scan QC rating of an OCT training set (n = 50) was followed by a test set (n = 50). Inter-rater agreement was calculated using kappa statistics. Results were discussed at a round table after the assessment had taken place. RESULTS The inter-rater QC agreement was substantial (kappa = 0.7). Disagreement was highest for judging signal strength (kappa = 0.40). Future steps to resolve these issues were discussed. CONCLUSION Substantial agreement for QC assessment was achieved with the aid of the OSCAR-IB criteria. The task force has developed a website for free online training and QC certification. The criteria may prove useful for future research and trials in MS using OCT as a secondary outcome measure in a multi-centre setting.
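The kappa statistics above (Cohen's kappa for a rater pair) correct raw agreement for the agreement expected by chance from each rater's label frequencies. A minimal two-rater sketch (names and the toy data are our own, not from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("raters must label the same non-empty item set")
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    chance = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (observed - chance) / (1 - chance)

# Two raters judging five scans as QC pass (1) or fail (0).
print(cohens_kappa([1, 1, 0, 1, 0], [1, 1, 0, 0, 0]))  # ~0.615
```

On the usual Landis-Koch reading, values around 0.7, as reported for overall QC agreement, fall in the "substantial" band, while 0.40 for signal strength sits at the "fair/moderate" boundary.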
Abstract:
Introduction To meet the quality standards for high-stakes OSCEs, it is necessary to ensure high-quality standardized performance of the SPs involved [1]. One way this can be assured is through assessment of the quality of SPs' performance in training and during the assessment. There is some literature concerning validated instruments that have been used to assess SP performance in formative contexts, but very little related to high-stakes contexts [2], [3], [4]. Content and structure During this workshop, different approaches to quality control of SPs' performance, developed in medicine, pharmacy, and nursing OSCEs, will be introduced. Participants will have the opportunity to use these approaches in simulated interactions. Advantages and disadvantages of these approaches will be discussed. Anticipated outcomes By the end of this session, participants will be able to discuss the rationale for quality control of SPs' performance in high-stakes OSCEs, outline key factors in creating strategies for quality control, identify various strategies for assuring quality control, and reflect on applications to their own practice. Who should attend The workshop is designed for those interested in quality assurance of SP performance in high-stakes OSCEs. Level All levels are welcome. References Adamo G. 2003. Simulated and standardized patients in OSCEs: achievements and challenges: 1992-2003. Med Teach. 25(3), 262-270. Wind LA, Van Dalen J, Muijtjens AM, Rethans JJ. Assessing simulated patients in an educational setting: the MaSP (Maastricht Assessment of Simulated Patients). Med Educ 2004, 38(1):39-44. Bouter S, van Weel-Baumgarten E, Bolhuis S. Construction and validation of the Nijmegen Evaluation of the Simulated Patient (NESP): Assessing Simulated Patients' ability to role-play and provide feedback to students. Acad Med: Journal of the Association of American Medical Colleges 2012.
May W, Fisher D, Souder D: Development of an instrument to measure the quality of standardized/simulated patient verbal feedback. Med Educ 2012, 2(1).
Abstract:
BACKGROUND AND AIMS Inflammatory bowel disease (IBD) frequently manifests during childhood and adolescence. For providing a comprehensive picture of a patient's health status, health-related quality of life (HRQoL) instruments are an essential complement to clinical symptoms and functional limitations. Currently, the IMPACT-III questionnaire is one of the most frequently used disease-specific HRQoL instruments among patients with IBD. However, there is a lack of studies examining the validity and reliability of this instrument. METHODS 146 paediatric IBD patients from the multicenter Swiss IBD paediatric cohort study database were included in the study. Medical and laboratory data were extracted from the hospital records. HRQoL data were assessed by means of standardized questionnaires filled out by the patients in a face-to-face interview. RESULTS The original six IMPACT-III domain scales could not be replicated in the current sample. A principal component analysis with the extraction of four factor scores revealed the most robust solution. The four factors showed good internal reliability (Cronbach's alpha = .64-.86), good concurrent validity as measured by correlations with the generic KIDSCREEN-27 scales, and excellent discriminant validity for the dimension of physical functioning, as measured by HRQoL differences between active and inactive severity groups (p<.001, d=1.04). CONCLUSIONS This study with Swiss children with IBD indicates good validity and reliability for the IMPACT-III questionnaire. However, our findings suggest a slightly different factor structure than originally proposed. The IMPACT-III questionnaire can be recommended for use in clinical practice. The factor structure should be further examined in other samples.
Abstract:
BACKGROUND CONTEXT The nerve root sedimentation sign in transverse magnetic resonance imaging has been shown to discriminate well between selected patients with and without lumbar spinal stenosis (LSS), but the performance of this new test, when used in a broader patient population, is not yet known. PURPOSE To evaluate the clinical performance of the nerve root sedimentation sign in detecting central LSS above L5 and to determine its potential significance for treatment decisions. STUDY DESIGN Retrospective cohort study. PATIENT SAMPLE One hundred eighteen consecutive patients with suspected LSS (52% women, median age 62 years) with a median follow-up of 24 months. OUTCOME MEASURES Oswestry disability index (ODI) and back and leg pain relief. METHODS We performed a clinical test validation study to assess the clinical performance of the sign by measuring its association with health outcomes. Subjects were patients referred to our orthopedic spine unit from 2004 to 2007, before the sign had been described. Based on clinical and radiological diagnostics, patients had been treated with decompression surgery or nonsurgical treatment. Changes in the ODI and pain from baseline to 24-month follow-up were compared between sedimentation sign positives and negatives in both treatment groups. RESULTS Sixty-nine patients underwent surgery. Average baseline ODI in the surgical group was 54.7%, and the sign was positive in 39 patients (mean ODI improvement 29.0 points) and negative in 30 (ODI improvement 28.4), with no statistically significant difference in ODI and pain improvement between groups. In the 49 patients of the nonsurgical group, mean baseline ODI was 42.4%; the sign was positive in 18 (ODI improvement 0.6) and negative in 31 (ODI improvement 17.7). A positive sign was associated with smaller ODI and back pain improvements than a negative sign (both p<.01 on t test).
CONCLUSIONS In patients commonly treated with decompression surgery, the sedimentation sign does not appear to predict surgical outcome. In nonsurgically treated patients, a positive sign is associated with more limited improvement. In these cases, surgery might be effective, but this needs investigation in prospective randomized trials (Australian New Zealand Clinical Trial Registry, number ACTRN12610000567022).