872 results for "patient-based outcomes"
Abstract:
Master's dissertation in Statistics
Abstract:
STUDY DESIGN: Prospective, controlled, observational outcome study using clinical, radiographic, and patient/physician-based questionnaire data, with patient outcomes at 12 months follow-up. OBJECTIVE: To validate appropriateness criteria for low back surgery. SUMMARY OF BACKGROUND DATA: Most surgical treatment failures are attributed to poor patient selection, but no widely accepted consensus exists on detailed indications for appropriate surgery. METHODS: Appropriateness criteria for low back surgery have been developed by a multispecialty panel using the RAND appropriateness method. Based on panel criteria, a prospective study compared outcomes of patients appropriately and inappropriately treated at a single institution with 12 months follow-up assessment. Included were patients with low back pain and/or sciatica referred to the neurosurgical department. Information about symptoms, neurologic signs, the health-related quality of life (SF-36), disability status (Roland-Morris), and pain intensity (VAS) was assessed at baseline, at 6 months, and at 12 months follow-up. The appropriateness criteria were administered prospectively to each clinical situation and outside of the clinical setting, with the surgeon and patients blinded to the results of the panel decision. The patients were further stratified into 2 groups: appropriate treatment group (ATG) and inappropriate treatment group (ITG). RESULTS: Overall, 398 patients completed all forms at 12 months. Treatment was considered appropriate for 365 participants and inappropriate for 33 participants. The mean improvement in the SF-36 physical component score at 12 months was significantly higher in the ATG (mean: 12.3 points) than in the ITG (mean: 6.8 points) (P = 0.01), as well as the mean improvement in the SF-36 mental component score (ATG mean: 5.0 points; ITG mean: -0.5 points) (P = 0.02). 
Improvement was also significantly higher in the ATG for the mean VAS back pain (ATG mean: 2.3 points; ITG mean: 0.8 points; P = 0.02) and Roland-Morris disability score (ATG mean: 7.7 points; ITG mean: 4.2 points; P = 0.004). The ATG also had a higher improvement in mean VAS for sciatica (4.0 points) than the ITG (2.8 points), but the difference was not significant (P = 0.08). The SF-36 General Health score declined in both groups after 12 months; however, the decline was worse in the ITG (mean decline: 8.2 points) than in the ATG (mean decline: 1.2 points) (P = 0.04). Overall, compared with ITG patients, ATG patients had significantly higher improvement at 12 months, both statistically and clinically. CONCLUSION: Compared with previously reported literature, our study is the first to assess the utility of appropriateness criteria for low back surgery at 1-year follow-up with multiple outcome dimensions. Our results confirm the hypothesis that application of appropriateness criteria can significantly improve patient outcomes.
Abstract:
BACKGROUND/AIMS: Prospective studies on factors associated with adverse kidney outcomes in European general populations are scant. Also, few studies consider the potential confounding effect of baseline kidney function. METHODS: We used baseline (2003-2006) and 5-year follow-up data of adults from the general population to evaluate the effect of baseline kidney function and proteinuria on the association of clinical, biological (e.g. uric acid, homocysteine, cytokines), and socioeconomic factors with change in kidney function, rapid decline in kidney function, and incidence of chronic kidney disease (CKD). Estimated glomerular filtration rate (eGFR) and urinary albumin-to-creatinine ratio (UACR) were collected. Kidney outcomes were modeled using multivariable regressions. RESULTS: A total of 4,441 subjects were included in the analysis. Among participants without CKD at baseline, 11.4% experienced rapid decline in eGFR and/or incident CKD. After adjustment for baseline eGFR and log UACR, only age (odds ratio [OR] 1.25 [95% CI 1.18-1.33]), diabetes (OR 1.48 [1.03-2.13]), education (OR middle vs. high 1.51 [1.08-2.11]) and log ultrasensitive CRP (OR 1.16 [1.05-1.22]) were associated with rapid decline in eGFR or incident CKD. Baseline log UACR (OR 1.18 [1.06-1.32]) but not eGFR was associated with rapid decline in eGFR and/or incident CKD. CONCLUSION: In addition to age and diabetes, education and CRP levels are associated with adverse kidney outcomes independently of baseline kidney function.
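The odds ratios above are reported with Wald-style 95% confidence intervals. As a minimal sketch of how such intervals follow from a fitted logistic model, here is the standard transformation from a coefficient (log-odds scale) and its standard error; the coefficient and SE below are hypothetical values chosen to be consistent with the reported diabetes OR of 1.48 [1.03-2.13]:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient (log-odds scale) and its standard error."""
    return (math.exp(beta),            # point estimate
            math.exp(beta - z * se),   # lower bound
            math.exp(beta + z * se))   # upper bound

# Hypothetical beta/SE consistent with the reported diabetes OR
or_, lo, hi = odds_ratio_ci(beta=0.392, se=0.185)
print(f"OR {or_:.2f} [95% CI {lo:.2f}-{hi:.2f}]")
```

Note that an interval whose lower bound stays above 1.0, as here, corresponds to a statistically significant association at the 5% level.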
Abstract:
BACKGROUND: Chest pain is a common complaint in primary care, with coronary heart disease (CHD) being the most concerning of many potential causes. Systematic reviews on the sensitivity and specificity of symptoms and signs summarize the evidence about which of them are most useful in making a diagnosis. Previous meta-analyses are dominated by studies of patients referred to specialists. Moreover, as the analysis is typically based on study-level data, the statistical analyses in these reviews are limited while meta-analyses based on individual patient data can provide additional information. Our patient-level meta-analysis has three unique aims. First, we strive to determine the diagnostic accuracy of symptoms and signs for myocardial ischemia in primary care. Second, we investigate associations between study- or patient-level characteristics and measures of diagnostic accuracy. Third, we aim to validate existing clinical prediction rules for diagnosing myocardial ischemia in primary care. This article describes the methods of our study and six prospective studies of primary care patients with chest pain. Later articles will describe the main results. METHODS/DESIGN: We will conduct a systematic review and IPD meta-analysis of studies evaluating the diagnostic accuracy of symptoms and signs for diagnosing coronary heart disease in primary care. We will perform bivariate analyses to determine the sensitivity, specificity and likelihood ratios of individual symptoms and signs and multivariate analyses to explore the diagnostic value of an optimal combination of all symptoms and signs based on all data of all studies. We will validate existing clinical prediction rules from each of the included studies by calculating measures of diagnostic accuracy separately by study. DISCUSSION: Our study will face several methodological challenges. First, the number of studies will be limited. 
Second, the investigators of the original studies defined some outcomes and predictors differently. Third, the studies did not collect the same standard clinical data set. Fourth, missing data, varying from partly missing to fully missing, will have to be dealt with. Despite these limitations, we aim to summarize the available evidence regarding the diagnostic accuracy of symptoms and signs for diagnosing CHD in patients presenting with chest pain in primary care. REVIEW REGISTRATION: Centre for Reviews and Dissemination (University of York): CRD42011001170.
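The accuracy measures named in the methods (sensitivity, specificity, likelihood ratios) all derive from a 2×2 table of a symptom or sign against the reference diagnosis. A minimal sketch with purely hypothetical counts:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios
    from a 2x2 table (sign vs. reference diagnosis)."""
    sens = tp / (tp + fn)        # P(sign present | disease)
    spec = tn / (tn + fp)        # P(sign absent  | no disease)
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

# Hypothetical pooled counts for one symptom
sens, spec, lr_pos, lr_neg = diagnostic_accuracy(tp=80, fp=40, fn=20, tn=160)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```

A bivariate meta-analysis, as planned in the protocol, pools such pairs across studies while modeling the correlation between sensitivity and specificity; the sketch above only shows the per-table building block.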
Abstract:
OBJECTIVE: The purpose of this study was to assess outcomes and indications in a large cohort of patients who underwent liver transplantation (LT) for liver metastases (LM) from neuroendocrine tumors (NET) over a 27-year period. BACKGROUND: LT for NET remains controversial due to the absence of clear selection criteria and the scarcity and heterogeneity of reported cases. METHODS: This retrospective multicenter study included 213 patients who underwent LT for NET performed in 35 centers in 11 European countries between 1982 and 2009. One hundred seven patients underwent transplantation before 2000 and 106 after 2000. Mean age at the time of LT was 46 years. Half of the patients presented with hormone secretion and 55% had hepatomegaly. Before LT, 83% of patients had undergone surgical treatment of the primary tumor and/or LM and 76% had received chemotherapy. The median interval between diagnosis of LM and LT was 25 months (range, 1-149 months). In addition to LT, 24 patients underwent major resection procedures and 30 patients underwent minor resection procedures. RESULTS: Three-month postoperative mortality was 10%. At 5 years after LT, overall survival (OS) was 52% and disease-free survival was 30%. At 5 years from diagnosis of LM, OS was 73%. Multivariate analysis identified 3 predictors of poor outcome: major resection in addition to LT, poor tumor differentiation, and hepatomegaly. Since 2000, 5-year OS has increased to 59%, owing to fewer patients presenting with poor prognostic factors. Multivariate analysis of the 106 cases treated since 2000 identified the following predictors of poor outcome: hepatomegaly, age more than 45 years, and any amount of resection concurrent with LT. CONCLUSIONS: LT is an effective treatment of unresectable LM from NET. Patient selection based on the aforementioned predictors can achieve a 5-year OS between 60% and 80%. However, use of overly restrictive criteria may deny LT to some patients who could benefit.
Optimal timing for LT in patients with stable versus progressive disease remains unclear.
Abstract:
Background: Many studies have found considerable variations in the resource intensity of physical therapy episodes. Although they have identified several patient- and provider-related factors, few studies have examined their relative explanatory power. We sought to quantify the contribution of patients and providers to these differences and examine how effective Swiss regulations are (a nine-session ceiling per prescription and a bonus for first treatments). Methods: Our sample consisted of 87,866 first physical therapy episodes performed by 3,365 physiotherapists based on referrals by 6,131 physicians. We modeled the number of visits per episode using a multilevel log-linear regression with crossed random effects for physiotherapists and physicians and with fixed effects for cantons. The three-level explanatory variables were patient, physiotherapist and physician characteristics. Results: The median number of sessions was nine (interquartile range 6-13). Physical therapy use increased with age, female sex, higher health care costs, lower deductibles, surgery and specific conditions. Use rose with the share of nine-session episodes among physiotherapists or physicians, but fell with the share of new treatments. Geographical area had no influence. Most of the variance was explained at the patient level, but the available factors explained only 4% thereof. Physiotherapists and physicians explained only 6% and 5% of the variance, respectively, although the available factors explained most of this variance. Regulations were the most powerful factors. Conclusion: Against the backdrop of abundant physical therapy supply, Swiss financial regulations did not restrict utilization. Given that patient-related factors explained most of the variance, this group should be subject to closer scrutiny. Moreover, further research is needed on the determinants of patient demand.
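The variance-partitioning result above (most variance at the patient level, about 6% and 5% at the physiotherapist and physician levels) comes from converting the variance components of the crossed random-effects model into shares of total variance. A minimal sketch of that conversion; the component values below are hypothetical, chosen only to mirror the reported shares:

```python
def variance_shares(components):
    """Convert variance components of a multilevel model into
    shares of total variance (intraclass-correlation style)."""
    total = sum(components.values())
    return {level: var / total for level, var in components.items()}

# Hypothetical variance components mirroring the reported split
shares = variance_shares({"patient": 0.89,
                          "physiotherapist": 0.06,
                          "physician": 0.05})
print(shares)
```

In a real analysis the components would be estimated by the mixed model itself; the share computation is the same.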
Abstract:
Urinary schistosomiasis remains a significant burden for Africa and the Middle East. The success of population-based control programs will depend on their impact, over many years, on Schistosoma haematobium reinfection and associated disease. In a multi-year (1984-1992) control program in Kenya, we examined risk for S. haematobium reinfection and late disease during and after annual school-based treatment. In this setting, long-term risk of new infection was independently associated with location, age, hematuria, and incomplete treatment, but not with sex or frequency of water contact. Thus, very local environmental features and age-related factors played an important role in S. haematobium transmission, such that population-based control programs should optimally tailor their efforts to local conditions on a village-by-village basis. In 2001-2002, the late benefits of earlier participation in school-based antischistosomal therapy were estimated in a cohort of formerly treated adult residents compared with never-treated adults from the same villages. Among age-matched subjects, current infection prevalence was lower among those who had received therapy years earlier. In addition, prevalence of bladder abnormality was lower in the treated group, who were free of severe bladder disease. Treatment of affected adults resulted in rapid resolution of infection and any detectable bladder abnormalities. We conclude that continued treatment into adulthood, as well as efforts at long-term prevention of infection (transmission control), are necessary to achieve optimal morbidity control in affected communities.
Abstract:
BACKGROUND: In numerous high-risk medical and surgical conditions, a greater volume of patients undergoing treatment in a given setting or facility is associated with better survival. For patients with pulmonary embolism, the relation between the number of patients treated in a hospital (volume) and patient outcome is unknown. METHODS: We studied discharge records from 186 acute care hospitals in Pennsylvania for a total of 15,531 patients for whom the primary diagnosis was pulmonary embolism. The study outcomes were all-cause mortality in hospital and within 30 days after presentation for pulmonary embolism and the length of hospital stay. We used logistic models to study the association between hospital volume and 30-day mortality and discrete survival models to study the association between in-hospital mortality and time to hospital discharge. RESULTS: The median annual hospital volume for pulmonary embolism was 20 patients (interquartile range 10-42). Overall in-hospital mortality was 6.0%, whereas 30-day mortality was 9.3%. In multivariable analysis, very-high-volume hospitals (≥42 cases per year) had significantly lower odds of in-hospital death (odds ratio [OR] 0.71, 95% confidence interval [CI] 0.51-0.99) and of 30-day death (OR 0.71, 95% CI 0.54-0.92) than very-low-volume hospitals (<10 cases per year). Although patients in the very-high-volume hospitals had a slightly longer length of stay than those in the very-low-volume hospitals (mean difference 0.7 days), there was no association between volume and length of stay. INTERPRETATION: Treatment of pulmonary embolism in hospitals with a high volume of cases was associated with lower short-term mortality. Further research is required to determine the causes of the relation between volume and outcome for patients with pulmonary embolism.
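The volume strata used in the comparison can be read off the reported quartiles (median 20 cases per year, interquartile range 10-42). Below is a sketch of the implied four-way bucketing; the extreme cut-points (<10 and ≥42) are stated in the abstract, while the middle boundary at the median of 20 is inferred from the quartiles rather than stated explicitly:

```python
def volume_category(cases_per_year):
    """Bucket a hospital by annual pulmonary-embolism volume.
    Cut-points <10 and >=42 are the study's extremes; the middle
    boundary (20, the median) is inferred from the quartiles."""
    if cases_per_year < 10:
        return "very low"    # reference group in the analysis
    if cases_per_year < 20:
        return "low"
    if cases_per_year < 42:
        return "high"
    return "very high"       # >=42 cases per year

print(volume_category(50))
```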
Abstract:
PURPOSE: The prevalence of anaplastic lymphoma kinase (ALK) gene fusion (ALK positivity) in early-stage non-small-cell lung cancer (NSCLC) varies by population examined and detection method used. The Lungscape ALK project was designed to address the prevalence and prognostic impact of ALK positivity in resected lung adenocarcinoma in a primarily European population. METHODS: Analysis of ALK status was performed by immunohistochemistry (IHC) and fluorescent in situ hybridization (FISH) in tissue sections of 1,281 patients with adenocarcinoma in the European Thoracic Oncology Platform Lungscape iBiobank. Positive patients were matched with negative patients in a 1:2 ratio, both for IHC and for FISH testing. Testing was performed in 16 participating centers, using the same protocol after passing external quality assessment. RESULTS: Positive ALK IHC staining was present in 80 patients (prevalence of 6.2%; 95% CI, 4.9% to 7.6%). Of these, 28 patients were ALK FISH positive, corresponding to a lower bound for the prevalence of FISH positivity of 2.2%. FISH specificity was 100%, and FISH sensitivity was 35.0% (95% CI, 24.7% to 46.5%), with a sensitivity value of 81.3% (95% CI, 63.6% to 92.8%) for IHC 2+/3+ patients. The hazard of death for FISH-positive patients was lower than for IHC-negative patients (P = .022). Multivariable models, adjusted for patient, tumor, and treatment characteristics, and matched cohort analysis confirmed that ALK FISH positivity is a predictor for better overall survival (OS). CONCLUSION: In this large cohort of surgically resected lung adenocarcinomas, the prevalence of ALK positivity was 6.2% using IHC and at least 2.2% using FISH. A screening strategy based on IHC or H-score could be envisaged. ALK positivity (by either IHC or FISH) was related to better OS.
Abstract:
The lack of knowledge regarding polycystic hydatid disease results in delayed or even incorrect diagnosis. The lack of systematic information regarding treatment also makes it difficult to assess the results and prognosis in patients with peritoneal and hepatic lesions caused by Echinococcus vogeli. Here we describe the clinical features of patients, propose a radiological classification protocol and describe a therapeutic option for the treatment of hydatid disease that previously had only been used for cases of cystic echinococcosis (Echinococcus granulosus). A prospective cohort study was initiated in 1999 and by 2009 the study included 60 patients. These patients were classified according to the PNM classification (parasite lesion, neighbouring organ invasion and metastases) and assigned to one of three therapeutic modalities: (i) chemotherapy with albendazole at a dose of 10 mg/kg/day, (ii) surgical removal of cysts or (iii) percutaneous treatment of the cysts by puncture, aspiration, injection and re-aspiration (PAIR). The results were stratified according to therapeutic outcome: "cure", "clinical improvement", "no improvement", "death" or "no information". The PNM classification was useful in indicating the appropriate therapy in cases of polycystic hydatid disease. In conclusion, surgical therapy produced the best clinical results of all the therapies studied based on "cure" and "clinical improvement" outcomes. The use of PAIR for treatment requires additional study.
Abstract:
BACKGROUND: In recent decades, social inequalities in diabetes care have been observed in multiple countries, including Spain. These inequalities have been at least partially attributed to differences in diabetes self-management behaviours. Communication problems during medical consultations occur more frequently among patients with a lower educational level. The purpose of this cluster randomized trial is to determine whether an intervention implemented in general practice, based on improving patient-provider communication, results in better diabetes self-management in patients with a lower educational level. A secondary objective is to assess whether telephone reinforcement enhances the effect of such an intervention. We report the design and implementation of this ongoing study. METHODS/DESIGN: The study is being conducted in a General Practice located in a deprived neighbourhood of Granada, Spain. Diabetic patients aged 18 years or older with a low educational level and inadequate glycaemic control (HbA1c > 7%) were recruited. General Practitioners (GPs) were randomised to three groups: intervention A, intervention B and control group. GPs allocated to intervention groups A and B received training in communication skills and are providing graphic feedback about glycosylated haemoglobin levels. Patients whose GPs were allocated to group B are additionally receiving telephone reinforcement whereas patients from the control group are receiving usual care. The described interventions are being conducted during 7 consecutive medical visits which are scheduled every three months. The main outcome measure will be HbA1c; blood pressure, lipid profile, body mass index and waist circumference will be considered as secondary outcome measures. Statistical analysis to evaluate the effectiveness of the interventions will include multilevel regression analysis with three hierarchical levels: medical visit level, patient level and GP level.
DISCUSSION: The results of this study will provide new knowledge about possible strategies to promote better diabetes self-management in a particularly vulnerable group. If effective, this low cost intervention will have the potential to be easily incorporated into routine clinical practice, helping to reduce health inequalities in diabetic patients. TRIAL REGISTRATION: ClinicalTrials.gov (U.S. National Institutes of Health), NCT01849731.
Abstract:
Long-term outcomes after kidney transplantation remain suboptimal, despite the great achievements observed in recent years with the use of modern immunosuppressive drugs. Currently, the calcineurin inhibitors (CNI) cyclosporine and tacrolimus remain the cornerstones of immunosuppressive regimens in many centers worldwide, despite their well-described side-effects, including nephrotoxicity. In this article, we review recent CNI-minimization strategies in kidney transplantation, while emphasizing the importance of long-term follow-up and patient monitoring. Finally, accumulating data indicate that low-dose CNI-based regimens may provide a favorable balance between efficacy and toxicity.
Abstract:
INTRODUCTION: Optimal identification of subtle cognitive impairment in the primary care setting requires a very brief tool combining (a) patients' subjective impairments, (b) cognitive testing, and (c) information from informants. The present study developed a new, very quick and easily administered case-finding tool combining these assessments ('BrainCheck') and tested the feasibility and validity of this instrument in two independent studies. METHODS: We developed a case-finding tool comprising patient-directed (a) questions about memory and depression and (b) clock drawing, and (c) the informant-directed 7-item version of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Feasibility study: 52 general practitioners rated the feasibility and acceptance of the patient-directed tool. Validation study: An independent group of 288 Memory Clinic patients (mean ± SD age = 76.6 ± 7.9, education = 12.0 ± 2.6; 53.8% female) with diagnoses of mild cognitive impairment (n = 80), probable Alzheimer's disease (n = 185), or major depression (n = 23) and 126 demographically matched, cognitively healthy volunteer participants (age = 75.2 ± 8.8, education = 12.5 ± 2.7; 40% female) were enrolled. All patient and healthy control participants were administered the patient-directed tool, and informants of 113 patient and 70 healthy control participants completed the very short IQCODE. RESULTS: Feasibility study: General practitioners rated the patient-directed tool as highly feasible and acceptable. Validation study: A Classification and Regression Tree analysis generated an algorithm to categorize patient-directed data, which resulted in a correct classification rate (CCR) of 81.2% (sensitivity = 83.0%, specificity = 79.4%). Critically, the CCR of the combined patient- and informant-directed instruments (BrainCheck) reached nearly 90% (89.4%; sensitivity = 97.4%, specificity = 81.6%).
CONCLUSION: A new and very brief instrument for general practitioners, 'BrainCheck', combined three sources of information deemed critical for effective case-finding (patients' subjective impairments, cognitive testing, and informant information) and resulted in a nearly 90% CCR. Thus, it provides a very efficient and valid tool to aid general practitioners in deciding whether patients with suspected cognitive impairments should be further evaluated or managed by 'watchful waiting'.
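To illustrate how a CART-derived algorithm of this kind turns the three information sources into a refer-versus-wait decision, here is a toy rule. The tree structure and the IQCODE cut-off of 3.3 are hypothetical illustrations, not the published BrainCheck algorithm:

```python
def braincheck_flag(memory_complaint, clock_ok, iqcode_mean):
    """Toy refer/watchful-waiting rule combining the three sources.
    Structure and the 3.3 IQCODE cut-off are illustrative only."""
    if not clock_ok:
        return "refer"            # failed clock drawing
    if memory_complaint and iqcode_mean >= 3.3:
        return "refer"            # complaint confirmed by informant
    return "watchful waiting"     # no converging evidence

print(braincheck_flag(memory_complaint=True, clock_ok=True, iqcode_mean=3.5))
```

A real CART analysis learns both the tree structure and the cut-offs from data; the point of the sketch is only the shape of the resulting decision rule.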
Abstract:
PURPOSE: Currently, many pre-conditions are regarded as relative or absolute contraindications for lumbar total disc replacement (TDR). Radiculopathy is one of them. In Switzerland, the decision of when to operate is left to the surgeon's discretion, provided a list of pre-defined indications is adhered to. Contraindications, however, are less clearly specified. We hypothesized that the extent of pre-operative radiculopathy results in different benefits for patients treated with mono-segmental lumbar TDR. We used patient-perceived leg pain and its correlation with physician-recorded radiculopathy for creating the patient groups to be compared. METHODS: The present study is based on the dataset of SWISSspine, a government-mandated health technology assessment registry. Between March 2005 and April 2009, 577 patients underwent either mono- or bi-segmental lumbar TDR, which was documented in a prospective observational multicenter mode. A total of 416 cases with a mono-segmental procedure were included in the study. The data collection consisted of pre-operative and follow-up data (physician based) and clinical outcomes (NASS form, EQ-5D). A receiver operating characteristic (ROC) analysis was conducted with patients' self-indicated leg pain and the surgeon-based diagnosis "radiculopathy", as marked on the case report forms. As a result, patients were divided into two groups according to the severity of leg pain. The two groups were compared with regard to the pre-operative patient characteristics and pre- and post-operative pain on Visual Analogue Scale (VAS) and quality of life using general linear modeling. RESULTS: The optimal ROC model revealed a leg pain threshold of VAS 40 (VAS < 40 vs. VAS ≥ 40) for the absence or presence of "radiculopathy". Demographics in the resulting two groups were well comparable. Applying this threshold, the mean pre-operative leg pain level was 16.5 points in group 1 and 68.1 points in group 2 (p < 0.001).
Back pain levels differed less, at 63.6 points in group 1 and 72.6 in group 2 (p < 0.001). Pre-operative quality of life showed considerable differences, with an EQ-5D score of 0.44 in group 1 and 0.29 in group 2 (p < 0.001, possible score range -0.6 to 1). At a mean follow-up time of 8 months, group 1 showed a mean leg pain improvement of 3.6 points and group 2 of 41.1 points (p < 0.001). Back pain relief was 35.6 and 39.1 points, respectively (p = 0.27). EQ-5D score improvement was 0.27 in group 1 and 0.41 in group 2 (p = 0.11). CONCLUSIONS: Patients labeled as having radiculopathy (group 2) mostly have pre-operative leg pain levels ≥ 40. Applying this threshold, the patients with pre-operative leg pain also have more severe back pain and a considerably lower quality of life. Their net benefit from lumbar TDR is higher, and they achieve post-operative back and leg pain levels and quality of life similar to those of patients without pre-operative leg pain. Although randomized controlled trials are required for confirmation, these findings put leg pain and radiculopathy into perspective as absolute contraindications for TDR.
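The ROC step described in the methods amounts to scanning candidate VAS cut-offs and picking the one that best separates patients with and without the surgeon's "radiculopathy" label. A self-contained sketch on made-up data, maximizing Youden's J (one common optimality criterion for choosing a threshold; the study's exact criterion is not stated in the abstract):

```python
def best_threshold(scores, labels):
    """Grid-search the cut-off maximizing Youden's J
    (sensitivity + specificity - 1) for scores vs. binary labels."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and not y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical VAS leg-pain scores and surgeon-recorded labels
scores = [10, 15, 20, 35, 40, 45, 60, 70, 80, 90]
labels = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
t, j = best_threshold(scores, labels)
print(f"best cut-off: VAS >= {t} (Youden J = {j:.2f})")
```

On this toy data the labels separate perfectly at VAS 40, echoing the study's reported threshold; real registry data would of course yield an imperfect J.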