957 results for Score Cards


Relevance: 20.00%

Abstract:

OBJECTIVE To assess whether palliative primary tumor resection in colorectal cancer patients with incurable stage IV disease is associated with improved survival. BACKGROUND There is a heated debate regarding whether or not an asymptomatic primary tumor should be removed in patients with incurable stage IV colorectal disease. METHODS Stage IV colorectal cancer patients were identified in the Surveillance, Epidemiology, and End Results database between 1998 and 2009. Patients undergoing surgery to metastatic sites were excluded. Overall survival and cancer-specific survival were compared between patients with and without palliative primary tumor resection using risk-adjusted Cox proportional hazard regression models and stratified propensity score methods. RESULTS Overall, 37,793 stage IV colorectal cancer patients were identified. Of those, 23,004 (60.9%) underwent palliative primary tumor resection. The proportion of patients undergoing palliative primary cancer resection decreased from 68.4% in 1998 to 50.7% in 2009 (P < 0.001). In Cox regression analysis after propensity score matching, primary cancer resection was associated with significantly improved overall survival [hazard ratio (HR) of death = 0.40, 95% confidence interval (CI) = 0.39-0.42, P < 0.001] and cancer-specific survival (HR of death = 0.39, 95% CI = 0.38-0.40, P < 0.001). The benefit of palliative primary cancer resection persisted throughout the period 1998 to 2009, with HRs equal to or less than 0.47 for both overall and cancer-specific survival. CONCLUSIONS On the basis of this population-based cohort of stage IV colorectal cancer patients, palliative primary tumor resection was associated with improved overall and cancer-specific survival. Therefore, the dogma that an asymptomatic primary tumor should never be resected in patients with unresectable colorectal cancer metastases must be questioned.
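The propensity-score adjustment described above can be sketched as 1:1 greedy nearest-neighbour matching. This is a minimal, hypothetical illustration on a simulated cohort, not the SEER analysis: the caliper, cohort size, and score distribution are invented, and in a real analysis the propensity score would come from a fitted model of treatment on baseline covariates.

```python
import random

random.seed(42)

# Toy cohort: (patient_id, propensity_score, treated). In practice the
# propensity score is the fitted probability of receiving palliative
# resection given baseline covariates; here it is simulated.
patients = [(i, random.random(), random.random() < 0.6) for i in range(200)]
treated = [p for p in patients if p[2]]
controls = [p for p in patients if not p[2]]

CALIPER = 0.05  # maximum allowed propensity-score distance for a match


def greedy_match(treated, controls, caliper):
    """1:1 greedy nearest-neighbour matching on the propensity score.
    Each treated patient takes the closest still-unmatched control,
    provided the score distance does not exceed the caliper."""
    pool = list(controls)
    pairs = []
    for t in sorted(treated, key=lambda p: p[1]):
        if not pool:
            break
        best = min(pool, key=lambda c: abs(c[1] - t[1]))
        if abs(best[1] - t[1]) <= caliper:
            pairs.append((t, best))
            pool.remove(best)  # each control is used at most once
    return pairs


pairs = greedy_match(treated, controls, CALIPER)
```

Survival in the matched pairs would then be compared with a Cox model, as in the abstract.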

Relevance: 20.00%

Abstract:

OBJECTIVES To improve malnutrition awareness and management in our department of general internal medicine; to assess patients' nutritional risk; and to evaluate whether an online educational program leads to an increase in basic knowledge and more frequent nutritional therapies. METHODS A prospective pre-post intervention study at a university department of general internal medicine was conducted. Nutritional screening using the Nutritional Risk Score 2002 (NRS 2002) was performed, and prescriptions of nutritional therapies were assessed. The intervention included an online learning program and a pocket card for all residents, who had to fill in a multiple-choice question (MCQ) test about basic nutritional knowledge before and after the intervention. RESULTS A total of 342 patients were included in the preintervention phase and 300 in the postintervention phase. In the preintervention phase, 54.1% were at nutritional risk (NRS 2002 ≥3), compared with 61.7% in the postintervention phase. There was no increase in the prescription of nutritional therapies (18.7% versus 17.0%). Forty-nine and 41 residents (response rates 58% and 48%) filled in the MCQ test before and after the intervention, respectively. The mean percentage of correct answers was 55.6% and 59.43%, respectively (a nonsignificant difference). Fifty of 84 residents completed the online program. The residents who participated in the whole program scored higher on the second MCQ test (63% versus 55% correct answers, P = 0.031). CONCLUSIONS Despite a high proportion of malnourished patients, the nutritional intervention, as assessed by nutritional prescriptions, was insufficient. The simple educational program via the Internet and use of NRS 2002 pocket cards did not improve either malnutrition awareness or nutritional treatment. More sophisticated educational systems to fight malnutrition are necessary.
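A minimal sketch of the NRS 2002 total-score rule behind the screening above: an impaired-nutrition component (graded 0-3) plus a disease-severity component (graded 0-3), plus one point for age 70 or older; a total of 3 or more flags nutritional risk. The grading of the two components is a clinical judgement and is taken here as a given input.

```python
def nrs_2002_total(nutrition_component, severity_component, age):
    """NRS 2002 final screening rule.

    total = impaired-nutrition component (0-3)
          + disease-severity component (0-3)
          + 1 if age >= 70
    A total >= 3 indicates the patient is at nutritional risk.
    Returns (total, at_risk).
    """
    if not (0 <= nutrition_component <= 3 and 0 <= severity_component <= 3):
        raise ValueError("components are graded 0-3")
    total = nutrition_component + severity_component + (1 if age >= 70 else 0)
    return total, total >= 3
```

For example, a 75-year-old with mild impairment in both components scores 1 + 1 + 1 = 3 and is flagged, while the same patient at age 50 scores 2 and is not.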

Relevance: 20.00%

Abstract:

BACKGROUND & AIMS Cirrhotic patients with acute decompensation frequently develop acute-on-chronic liver failure (ACLF), which is associated with high mortality rates. Recently, a specific score for these patients has been developed using the CANONIC study database. The aims of this study were to develop and validate the CLIF-C AD score (CLIF-C ADs), a specific prognostic score for hospitalised cirrhotic patients with acute decompensation (AD) but without ACLF, and to compare it with the Child-Pugh, MELD, and MELD-Na scores. METHODS The derivation set included 1016 CANONIC study patients without ACLF. Proportional hazards models considering liver transplantation as a competing risk were used to identify score parameters. Estimated coefficients were used as relative weights to compute the CLIF-C ADs. External validation was performed in 225 cirrhotic AD patients. The CLIF-C ADs was also tested for sequential use. RESULTS Age, serum sodium, white-cell count, creatinine, and INR were selected as the best predictors of mortality. The C-index was better for the CLIF-C ADs than for the Child-Pugh, MELD, and MELD-Na scores at predicting 3- and 12-month mortality in the derivation, internal validation, and external datasets. The ability of the CLIF-C ADs to predict 3-month mortality improved when using data from days 2, 3-7, and 8-15 (C-index: 0.72, 0.75, and 0.77, respectively). CONCLUSIONS The new CLIF-C ADs is more accurate than other liver scores in predicting prognosis in hospitalised cirrhotic patients without ACLF. The CLIF-C ADs may therefore be used to identify a high-risk cohort for intensive management and a low-risk group that may be discharged early.
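The score comparisons above rest on Harrell's concordance index (C-index): among comparable patient pairs, the fraction in which the higher predicted risk belongs to the patient with the shorter survival. A minimal pure-Python sketch for right-censored data, with toy inputs rather than CANONIC data:

```python
def c_index(times, events, risks):
    """Harrell's concordance index.

    A pair (i, j) is comparable when times[i] < times[j] and the event
    for i was observed (events[i] is truthy). The pair is concordant
    when the shorter survival time carries the higher predicted risk;
    ties in predicted risk count one half.
    """
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable
```

A perfectly ranked model scores 1.0, a useless one about 0.5, which is how values such as 0.72-0.77 in the abstract are read.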

Relevance: 20.00%

Abstract:

OBJECTIVE We endeavored to develop an unruptured intracranial aneurysm (UIA) treatment score (UIATS) model that includes and quantifies key factors involved in clinical decision-making in the management of UIAs and to assess agreement for this model among specialists in UIA management and research. METHODS An international multidisciplinary (neurosurgery, neuroradiology, neurology, clinical epidemiology) group of 69 specialists was convened to develop and validate the UIATS model using a Delphi consensus. For internal (39 panel members involved in identification of relevant features) and external validation (30 independent external reviewers), 30 selected UIA cases were used to analyze agreement with UIATS management recommendations based on a 5-point Likert scale (5 indicating strong agreement). Interrater agreement (IRA) was assessed with standardized coefficients of dispersion (vr*) (vr* = 0 indicating excellent agreement and vr* = 1 indicating poor agreement). RESULTS The UIATS accounts for 29 key factors in UIA management. Agreement with UIATS (mean Likert scores) was 4.2 (95% confidence interval [CI] 4.1-4.3) per reviewer for both reviewer cohorts; agreement per case was 4.3 (95% CI 4.1-4.4) for panel members and 4.5 (95% CI 4.3-4.6) for external reviewers (p = 0.017). Mean Likert scores were 4.2 (95% CI 4.1-4.3) for interventional reviewers (n = 56) and 4.1 (95% CI 3.9-4.4) for noninterventional reviewers (n = 12) (p = 0.290). Overall IRA (vr*) for both cohorts was 0.026 (95% CI 0.019-0.033). CONCLUSIONS This novel UIA decision guidance study captures an excellent consensus among highly informed individuals on UIA management, irrespective of their underlying specialty. Clinicians can use the UIATS as a comprehensive mechanism for indicating how a large group of specialists might manage an individual patient with a UIA.

Relevance: 20.00%

Abstract:

Background: The efficacy of cognitive behavioral therapy (CBT) for the treatment of depressive disorders has been demonstrated in many randomized controlled trials (RCTs). This study investigated whether similar effects can be expected for CBT under routine care conditions when the patients are comparable to those examined in RCTs. Method: N = 574 CBT patients from an outpatient clinic were stepwise matched to the patients undergoing CBT in the National Institute of Mental Health Treatment of Depression Collaborative Research Program (TDCRP). First, the exclusion criteria of the RCT were applied to the naturalistic sample of the outpatient clinic. Second, propensity score matching (PSM) was used to adjust the remaining naturalistic sample on the basis of baseline covariate distributions. The matched samples were then compared regarding treatment effects using effect sizes, the average treatment effect on the treated (ATT), and recovery rates. Results: CBT in the adjusted naturalistic subsample was as effective as in the RCT. However, treatments lasted significantly longer under routine care conditions. Limitations: The samples included only a limited number of common predictor variables and stemmed from different countries. There might be additional covariates that could further improve the matching between the samples. Conclusions: CBT for depression in clinical practice may be as effective as manual-based treatments in RCTs when applied to comparable patients. The fact that similar effects were reached under routine conditions with more sessions, however, points to the potential to optimize treatments in clinical practice with respect to their efficiency.
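One of the comparison metrics named above is the effect size. A minimal sketch of Cohen's d with a pooled standard deviation, on toy score lists rather than the TDCRP or clinic samples:

```python
import math


def cohens_d(a, b):
    """Cohen's d: difference in means divided by the pooled standard
    deviation (sample variances with n - 1 denominators)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

With depression scores, a negative d for treatment minus control would indicate lower (better) post-treatment scores; comparable d values across the matched samples is what "as effective as in the RCT" summarises.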

Relevance: 20.00%

Abstract:

Biosecurity is crucial for safeguarding livestock from infectious diseases. Despite the plethora of biosecurity recommendations, published scientific evidence on the effectiveness of individual biosecurity measures is limited. The objective of this study was to assess the perception of Swiss experts about the effectiveness and importance of individual on-farm biosecurity measures for cattle and swine farms (31 and 30 measures, respectively). Using a modified Delphi method, 16 Swiss livestock disease specialists (8 for each species) were interviewed. The experts were asked to rank biosecurity measures that were written on cards, by allocating a score from 0 (lowest) to 5 (highest). Experts ranked biosecurity measures based on their importance related to Swiss legislation, feasibility, as well as the effort required for implementation and the benefit of each biosecurity measure. The experts also ranked biosecurity measures based on their effectiveness in preventing an infectious agent from entering and spreading on a farm, solely based on transmission characteristics of specific pathogens. The pathogens considered by cattle experts were those causing Bluetongue (BT), Bovine Viral Diarrhea (BVD), Foot and Mouth Disease (FMD) and Infectious Bovine Rhinotracheitis (IBR). Swine experts expressed their opinion on the pathogens causing African Swine Fever (ASF), Enzootic Pneumonia (EP), Porcine Reproductive and Respiratory Syndrome (PRRS), as well as FMD. For cattle farms, biosecurity measures that improve disease awareness of farmers were ranked as both most important and most effective. For swine farms, the most important and effective measures identified were those related to animal movements. Among all single measures evaluated, education of farmers was perceived by the experts to be the most important and effective for protecting both Swiss cattle and swine farms from disease. 
The findings of this study provide an important basis for recommendations to farmers and policy makers.
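The card-scoring step above, in which each expert allocates a 0-5 score per measure, can be aggregated into a ranking by mean score. The measures and scores below are invented for illustration; they are not the study's data.

```python
# Each expert allocates a 0-5 score per biosecurity measure (toy data).
scores = {
    "farmer education": [5, 5, 4, 5],
    "animal-movement controls": [4, 5, 4, 4],
    "visitor boot disinfection": [3, 2, 3, 3],
}


def rank_measures(scores):
    """Order biosecurity measures by mean expert score, highest first."""
    mean = {measure: sum(vals) / len(vals) for measure, vals in scores.items()}
    return sorted(mean, key=mean.get, reverse=True)


ranking = rank_measures(scores)
```

A Delphi process would repeat such rounds, feeding the aggregate back to the experts until consensus stabilises.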

Relevance: 20.00%

Abstract:

INTRODUCTION The aim of the study was to identify the level of the Charlson comorbidity index (CCI) at which older patients (>70 years) with high-risk prostate cancer (PCa) still achieve a survival benefit following radical prostatectomy (RP). METHODS We retrospectively analyzed 1008 older patients (>70 years) who underwent RP with pelvic lymph node dissection for high-risk PCa (preoperative prostate-specific antigen >20 ng/mL, clinical stage ≥T2c, or Gleason score ≥8) at 14 tertiary institutions between 1988 and 2014. The study population was grouped into CCI <2 and CCI ≥2 for analysis. Survival in each group was estimated with the Kaplan-Meier method and competing-risks Fine-Gray regression to identify the best explanatory multivariable model. The area under the curve (AUC) and the Akaike information criterion were used to identify the ideal cut-off for the CCI. RESULTS Clinical and cancer characteristics were similar between the two groups. Kaplan-Meier comparison of non-cancer death and of 5- and 10-year survival estimates showed significantly worse outcomes for patients with CCI ≥2. In the multivariable model used to decide the appropriate CCI cut-off point, CCI 2 had the better AUC and log-rank p value. CONCLUSION Older patients with fewer comorbidities harboring high-risk PCa appear to benefit from RP. Sicker patients are more likely to die of non-prostate-cancer-related causes and are less likely to benefit from RP.
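The cut-off search above uses the area under the ROC curve, which equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney formulation). A toy sketch, not the study's survival-adjusted analysis:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs in which the positive case has the
    higher score; score ties count one half."""
    pos = [s for y, s in zip(labels, scores) if y]
    neg = [s for y, s in zip(labels, scores) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Candidate CCI cut-offs can then be compared by the AUC each achieves for the outcome of interest, alongside an information criterion as in the abstract.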

Relevance: 20.00%

Abstract:

PURPOSE To compare patient outcomes and complication rates after different decompression techniques or instrumented fusion (IF) in lumbar spinal stenosis (LSS). METHODS The multicentre study was based on Spine Tango data. Inclusion criteria were LSS treated with posterior decompression and pre- and postoperative COMI assessment between 3 and 24 months. 1,176 cases were assigned to four groups: (1) laminotomy (n = 642), (2) hemilaminectomy (n = 196), (3) laminectomy (n = 230), and (4) laminectomy combined with IF (n = 108). Clinical outcomes were achievement of the minimum relevant change in COMI back and leg pain and COMI score (2.2 points), surgical and general complications, measures taken because of complications, and reintervention at the index level based on patient information. The inverse propensity score weighting method was used for adjustment. RESULTS Laminotomy, hemilaminectomy, and laminectomy were significantly less beneficial than laminectomy combined with IF regarding leg pain (ORs with 95% CI 0.52, 0.34-0.81; 0.25, 0.15-0.41; 0.44, 0.27-0.72, respectively) and COMI score improvement (ORs with 95% CI 0.51, 0.33-0.81; 0.30, 0.18-0.51; 0.48, 0.29-0.79, respectively). However, decompression alone caused significantly fewer surgical (ORs with 95% CI 0.42, 0.26-0.69; 0.33, 0.17-0.63; 0.39, 0.21-0.71, respectively) and general complications (ORs with 95% CI 0.11, 0.04-0.29; 0.03, 0.003-0.41; 0.25, 0.09-0.71, respectively) than laminectomy combined with IF. Accordingly, the likelihood of required measures was also significantly lower after laminotomy (OR 0.28, 95% CI 0.17-0.46), hemilaminectomy (OR 0.28, 95% CI 0.15-0.53), and laminectomy (OR 0.39, 95% CI 0.22-0.68) than after laminectomy with IF. The likelihood of reintervention was not significantly different between the treatment groups. DISCUSSION As already demonstrated in the literature, decompression in patients with LSS is a very effective treatment. Despite better patient outcomes after laminectomy combined with IF, caution is advised because of the higher rates of surgical and general complications and the measures these require. Based on the current study, laminotomy or laminectomy, rather than hemilaminectomy, is recommended for achieving minimum relevant pain relief.
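The adjustment named above, inverse propensity score weighting, gives each treated patient the weight 1/e and each control 1/(1 - e), where e is the propensity score, so that each group mimics the covariate mix of the full cohort. A hypothetical sketch with invented propensities and outcomes, not Spine Tango data:

```python
# (treated, propensity e, outcome y) for four toy patients.
data = [
    (1, 0.8, 7.0),
    (1, 0.6, 6.0),
    (0, 0.3, 4.0),
    (0, 0.2, 5.0),
]


def ipw_means(data):
    """Inverse-propensity-weighted outcome means for the treated and
    control groups: treated weight 1/e, control weight 1/(1 - e)."""
    tw_sum = tw = cw_sum = cw = 0.0
    for t, e, y in data:
        w = 1 / e if t else 1 / (1 - e)
        if t:
            tw_sum += w * y
            tw += w
        else:
            cw_sum += w * y
            cw += w
    return tw_sum / tw, cw_sum / cw


treated_mean, control_mean = ipw_means(data)
effect = treated_mean - control_mean  # weighted mean difference
```

The study's odds ratios would come from fitting outcome models on such weighted samples rather than from a raw mean difference.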

Relevance: 20.00%

Abstract:

The main objective of this study was to develop and validate a computer-based statistical algorithm, based on a multivariable logistic model, that can be translated into a simple scoring system to ascertain stroke cases from hospital admission medical records. This algorithm, the Risk Index Score (RISc), was developed using data collected prospectively by the Brain Attack Surveillance in Corpus Christi (BASIC) project. The validity of the RISc was evaluated by estimating the concordance of its stroke ascertainment with ascertainment by physician review of hospital admission records. The goal of this study was to provide a rapid, simple, efficient, and accurate method to ascertain the incidence of stroke from routine hospital admission records for epidemiologic investigations. (Abstract shortened by UMI.)
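Translating a multivariable logistic model into a simple scoring system is typically done by scaling each coefficient to the smallest one and rounding to integer points. The predictors and coefficients below are hypothetical placeholders for illustration, not the fitted RISc model:

```python
# Hypothetical logistic-regression coefficients (log odds ratios); the
# real RISc predictors and weights come from the BASIC-derived model.
betas = {
    "facial_droop": 1.2,
    "hemiparesis": 0.9,
    "speech_deficit": 0.6,
}


def to_points(betas):
    """Scale each coefficient to the smallest absolute coefficient and
    round, yielding integer points per predictor."""
    base = min(abs(b) for b in betas.values())
    return {name: round(b / base) for name, b in betas.items()}


points = to_points(betas)
# A patient's risk index is the sum of points for the predictors present.
risk_index = sum(points[name] for name in ("facial_droop", "speech_deficit"))
```

A chart abstractor can then sum integer points instead of evaluating the logistic model, at a small cost in discrimination.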

Relevance: 20.00%

Abstract:

Background. Large field studies of travelers' diarrhea (TD) in multiple destinations are limited by the need to perform stool cultures on site in a timely manner. A method for the collection, transport, and storage of fecal specimens that does not require immediate processing or refrigeration and is stable for months would be advantageous. Objectives. To determine whether enteric bacterial pathogen DNA can be identified on cards routinely used for evaluation of fecal occult blood. Methods. U.S. students traveling to Mexico in 2005-07 were followed for occurrence of diarrheal illness. When ill, students provided a stool specimen for culture and occult blood testing by the standard method. Cards were then stored at room temperature prior to DNA extraction. A multiplex fecal PCR was performed to identify enterotoxigenic Escherichia coli (ETEC) and enteroaggregative E. coli (EAEC) in DNA extracted from stools and occult blood cards. Results. Significantly more EAEC cases were identified by PCR of DNA extracted from cards (49%) or from frozen feces (40%) than by culture followed by HEp-2 adherence assays (13%). Similarly, more ETEC cases were detected in card DNA (38%) than in fecal DNA (30%) or by culture followed by hybridization (10%). Compared with culture, the sensitivity and specificity of the card test were 75% and 62% for EAEC and 50% and 63% for ETEC; compared with multiplex fecal PCR, they were 53% and 51% for EAEC and 56% and 70% for ETEC. Conclusions. DNA extracted from fecal cards used for detection of occult blood is useful for detecting enteric pathogens.
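Diagnostic comparisons like those above reduce to counts against a reference standard. A small sketch; the confusion-table counts are invented (chosen only to reproduce the reported 75%/62% for EAEC versus culture) and are not the study's raw data:

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    computed against the chosen reference standard (e.g. culture)."""
    return tp / (tp + fn), tn / (tn + fp)


# Invented counts for the card test vs. culture for one pathogen.
sensitivity, specificity = sens_spec(tp=15, fp=19, fn=5, tn=31)
```

Swapping the reference standard (culture versus multiplex fecal PCR) changes the table and hence the reported sensitivity and specificity, which is why the abstract quotes two sets of figures per pathogen.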

Relevance: 20.00%

Abstract:

In order to take better advantage of the abundant results from large-scale genomic association studies, investigators are turning to genetic risk score (GRS) methods to combine the information from common modest-effect risk alleles into an efficient risk assessment statistic. The statistical properties of these GRSs are poorly understood. As a first step toward a better understanding of GRSs, a systematic analysis of recent investigations using a GRS was undertaken. GRS studies in the areas of coronary heart disease (CHD), cancer, and other common diseases were identified using bibliographic databases and by hand-searching reference lists and journals. Twenty-one independent case-control studies, cohort studies, and simulation studies (12 in CHD, 9 in other diseases) were identified. The underlying statistical assumptions of the GRS were investigated using the experience of the Framingham risk score. Improvements in the construction of a GRS guided by the concept of composite indicators are discussed. The GRS is a promising risk assessment tool for improving the prediction and diagnosis of common diseases.
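A GRS is commonly computed in one of two forms: a simple count of risk alleles, or a weighted sum using per-allele effect sizes such as log odds ratios. The SNP identifiers, allele counts, and weights below are invented for illustration:

```python
# Risk-allele counts (0, 1, or 2) per SNP for one subject, and per-allele
# weights (e.g. log odds ratios from association studies). All invented.
genotypes = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
weights = {"rs0001": 0.20, "rs0002": 0.15, "rs0003": 0.10}

# Unweighted GRS: total number of risk alleles carried.
simple_grs = sum(genotypes.values())

# Weighted GRS: effect-size-weighted sum of risk-allele counts.
weighted_grs = sum(weights[snp] * count for snp, count in genotypes.items())
```

The weighted form assumes additive, independent per-allele effects, which is one of the statistical assumptions the review above examines.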

Relevance: 20.00%

Abstract:

In developing countries such as Argentina, survival of preterm infants with birth weights below 1000 grams falls far short of the results reported by developed countries. Deficient prenatal care, limited technical resources, and the saturation of neonatology services are partly responsible for these differences. One of the situations most frequently associated with ethical decisions in neonatology arises around the extremely preterm infant. The hardest questions to answer are whether there is a weight or gestational-age limit below which life-saving therapies should not be initiated or escalated, on the grounds that they are futile for the child, prolong life without hope, cause suffering to the patient and family, and occupy a unit bed that deprives another child with better survival prospects of care. In the present study, a neonatal risk score was developed from variables that characterize many populations in our Latin American countries, and it was statistically validated. The score is quick and easy to apply. It predicts whether a critically ill preterm infant is salvageable, making it possible to take ethical decisions based on a validated technique, to act in the best interests of the child and family, and at the same time to make more equitable use of resources.