752 results for Outcome Assessment (Health Care)


Relevance: 100.00%

Abstract:

Objective: Partial nephrectomy for small kidney tumors has become more common in recent decades, and the approach to non-palpable endophytic tumors remains a challenge, with a greater chance of positive margins or complications. The aim of this study is to describe an alternative nephron-sparing approach to small endophytic kidney tumors through anatrophic nephrotomy. Patients and Methods: A retrospective analysis of patients undergoing partial nephrectomy at our institution was performed, and the subjects with endophytic tumors treated with anatrophic nephrotomy were identified. Patient demographics, perioperative outcomes and oncological results were evaluated. Results: Among the partial nephrectomies performed for intraparenchymal tumors between 06/2006 and 06/2010, ten patients underwent anatrophic nephrotomy. The mean patient age was 42 years, and the mean tumor size was 2.3 cm. Mean warm ischemia time was 22.4 min, and histopathological analysis showed 80% clear cell carcinomas. At a mean follow-up of 36 months, no significant creatinine changes and no local or systemic recurrences were observed. Conclusion: The operative technique described is a safe and effective nephron-sparing option for complete removal of endophytic renal tumors.

Relevance: 100.00%

Abstract:

BACKGROUND: Pediatric truncal vascular injuries occur infrequently and have a reported mortality rate of 30% to 50%. This report examines the demographics, mechanisms of injury, associated trauma, and outcomes of patients presenting over a 10-year period at a single institution with truncal vascular injuries. METHODS: A retrospective review (1997-2006) of a pediatric trauma registry at a single institution was undertaken. RESULTS: Seventy-five truncal vascular injuries occurred in 57 patients (age, 12 +/- 3 years); the injury mechanism was penetrating in 37%. Concomitant injuries occurred with 76%, 62%, and 43% of abdominal, thoracic, and neck vascular injuries, respectively. Nonvascular complications occurred more frequently in patients with abdominal vascular injuries who were hemodynamically unstable on presentation. All patients with thoracic vascular injuries presenting with hemodynamic instability died. Among patients with neck vascular injuries, 1 of 2 who were hemodynamically unstable died, compared with 1 of 12 who presented hemodynamically stable. Overall survival was 75%. CONCLUSIONS: Survival and complications of pediatric truncal vascular injury are related to hemodynamic status at the time of presentation. Associated injuries are more frequent with trauma involving the abdomen.

Relevance: 100.00%

Abstract:

OBJECTIVE: We sought to determine maternal and neonatal outcomes by labor onset type and gestational age. STUDY DESIGN: We used electronic medical record data from 10 US institutions in the Consortium on Safe Labor on 115,528 deliveries from 2002 through 2008. Deliveries were divided by labor onset type (spontaneous, elective induction, indicated induction, unlabored cesarean). Neonatal and maternal outcomes were calculated by labor onset type and gestational age. RESULTS: Neonatal intensive care unit admissions and sepsis improved with each week of gestational age until 39 weeks (P < .001). After adjusting for complications, elective induction of labor was associated with a lower risk of ventilator use (odds ratio [OR], 0.38; 95% confidence interval [CI], 0.28-0.53), sepsis (OR, 0.36; 95% CI, 0.26-0.49), and neonatal intensive care unit admission (OR, 0.52; 95% CI, 0.48-0.57) compared with spontaneous labor. The relative risk of hysterectomy at term was 3.21 (95% CI, 1.08-9.54) with elective induction, 1.16 (95% CI, 0.24-5.58) with indicated induction, and 6.57 (95% CI, 1.78-24.30) with cesarean without labor, compared with spontaneous labor. CONCLUSION: Some neonatal outcomes improved until 39 weeks. Elective induction was associated with better neonatal outcomes than spontaneous labor, but may be associated with an increased hysterectomy risk.
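The odds ratios above come from models adjusted for complications; as a rough unadjusted illustration of how an OR and its 95% CI are derived from a 2x2 table (Woolf's log-odds method; the counts below are hypothetical, not taken from the study):

    # Unadjusted odds ratio with a Woolf 95% confidence interval (hypothetical counts).
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """a, b: outcome present/absent in the exposed group; c, d: same in the unexposed."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
        return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

    # Hypothetical: ventilator use after elective induction vs spontaneous labor
    or_, lo, hi = odds_ratio_ci(30, 4970, 300, 19700)
    print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")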

Relevance: 100.00%

Abstract:

Objectives: To evaluate the functional capacity of trauma patients one year after hospital discharge and to examine the association between functional capacity and factors related to the trauma and the hospital stay. Methods: Prospective cohort study of severe trauma patients (Injury Severity Score [ISS] >= 16) admitted between June and September 2010 to a surgical intensive care unit (ICU) specializing in polytrauma patients at a large public hospital in the city of São Paulo, Brazil. Variables of interest, such as age, sex, Glasgow Coma Scale score, Acute Physiology and Chronic Health Evaluation II (APACHE II), trauma mechanism, number of injuries, affected body region, number of surgeries, duration of mechanical ventilation (MV) and length of hospital stay, were collected from medical records. Functional capacity was assessed one year after hospital discharge using the Glasgow Outcome Scale (GOS) and the Lawton Instrumental Activities of Daily Living scale (Lawton IADL). Patients were also asked whether they had returned to work or study. Results: One-year follow-up was completed for 49 individuals, most of them young (36±11 years), male (81.6%) and victims of traffic accidents (71.5%). Each individual sustained approximately four injuries, yielding a mean ISS of 31±14.4. Traumatic brain injury was the most common type of injury (65.3%). According to the GOS, most patients presented moderate disability (43%) or mild or no disability (37%) one year after trauma. The mean Lawton IADL score was 12±4, with approximately 60-70% of individuals able to perform most of the assessed activities independently. Glasgow Coma Scale score, APACHE II, duration of MV and length of hospital stay were associated with functional capacity one year after injury. Multiple linear regression including all significant variables revealed an association between the Lawton IADL score and length of hospital stay. Only 32.6% of individuals returned to work or study. Conclusions: Most severe trauma patients were able to perform the assessed activities independently; only one third of them had returned to work and/or study one year after hospital discharge. Length of hospital stay emerged as a significant predictor of the recovery of functional capacity one year after severe injury.

Relevance: 100.00%

Abstract:

OBJECTIVE: To evaluate the responsiveness, over a one-year period, of the functional evaluation scale for patients with Duchenne muscular dystrophy (FES-DMD-D4) for the activities of sitting down on and rising from the floor. METHOD: Observational, longitudinal and retrospective study. Using the FES-DMDDATA software, a sample of 25 patients was studied for the activity of sitting down on the floor and 28 patients for rising from the floor. Assessments took place every three months over one year. Responsiveness was analyzed using magnitude-of-change indices, namely the effect size (ES) and the standardized response mean (SRM). RESULTS: Responsiveness for sitting down on the floor was low to moderate at three-month intervals (ES 0.28 to 0.54; SRM 0.38 to 0.71), moderate to high at six-month intervals (ES 0.69 to 1.07; SRM 0.86 to 1.19), and high at nine-month intervals (ES 1.3 to 1.17; SRM 1.26 to 1.55) and at twelve months (ES 1.9; SRM 1.72). For rising from the floor, responsiveness ranged from low to high at three-month intervals (ES 0.21 to 0.33; SRM 0.45 to 0.83), low to high at six-month intervals (ES 0.46 to 0.59; SRM 0.73 to 0.97), moderate to high at nine-month intervals (ES 0.76 to 0.88; SRM 1.03 to 1.22), and high at twelve months (ES 1.14; SRM 1.25). CONCLUSION: To detect clinically significant and consistent changes in the functional activities of sitting down on and rising from the floor, we recommend using the FES-DMD-D4 at intervals of six months or longer, as this was the time frame over which responsiveness ranged from moderate to high.
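Both indices divide the mean score change by a standard deviation: the ES by the SD of the baseline scores, the SRM by the SD of the change scores. A minimal sketch with hypothetical scores:

    # Effect size (ES) and standardized response mean (SRM) for paired assessments.
    import statistics

    def responsiveness(baseline, follow_up):
        change = [f - b for b, f in zip(baseline, follow_up)]
        es = statistics.mean(change) / statistics.stdev(baseline)  # ES: change vs baseline spread
        srm = statistics.mean(change) / statistics.stdev(change)   # SRM: change vs change spread
        return es, srm

    # Hypothetical FES-DMD-D4 scores for five patients, baseline vs six months
    es, srm = responsiveness([10, 12, 9, 14, 11], [13, 15, 10, 18, 14])
    print(f"ES = {es:.2f}, SRM = {srm:.2f}")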

Relevance: 100.00%

Abstract:

Study objectives: Smoking cessation for current smokers is a health-care imperative. It is not clear which approaches to smoking cessation are most effective in the hospital setting, nor which factors predict long-term abstinence. We hypothesized that a hospital-based smoking cessation program involving behavioral modification and support would provide an effective intervention for smoking cessation. Design: Prospective cohort study. Setting: Smoking cessation clinics in a tertiary referral cardiothoracic hospital. Patients or participants: Two hundred forty-three smokers and 187 never-smoker control subjects. Interventions: Smokers underwent specific sessions of individual counseling on behavioral modification, including written information, advice about quit aids, and support during the quit attempt. Abstinence was confirmed by exhaled carbon monoxide measurements. Measurements and results: Compared with never-smoker control subjects, smokers were more likely to have grown up with a smoking father or siblings, and to currently live or socialize with other smokers. Two hundred sixteen smokers attended at least two sessions of the smoking cessation program. Of these, 25% were unavailable for follow-up at 12 months and were assumed to be smoking. The point-prevalence abstinence rate at 12 months was 32%. Independent factors associated with abstinence at 12 months were self-belief in quitting ability, having a heart condition, growing up without siblings who smoked, and an increasing number of pack-years. Conclusions: This prospective study demonstrated that the hospital-based smoking cessation program was as effective as programs in other settings. Social and psychological factors were associated with a greater chance of abstinence.
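Counting those lost to follow-up as smokers keeps them in the denominator and makes the estimate conservative; a minimal sketch of that arithmetic (the confirmed-abstinent count is hypothetical, chosen to be consistent with the reported 32%):

    # 12-month point-prevalence abstinence, losses to follow-up assumed smoking.
    attended = 216                   # smokers who attended at least two sessions
    lost = round(0.25 * attended)    # unavailable at 12 months -> assumed smoking
    followed_up = attended - lost    # abstinence CO-verified only among these
    confirmed_abstinent = 69         # hypothetical count, consistent with the reported 32%
    print(f"{confirmed_abstinent / attended:.0%}")  # losses stay in the denominator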

Relevance: 100.00%

Abstract:

Background: A high level of adherence is required to achieve the desired outcomes of antiretroviral therapy. There is a paucity of information about adherence to combined antiretroviral therapy in Bayelsa State in southern Nigeria. Objectives: The objectives of the study were to determine the level of adherence to combined antiretroviral therapy among the patients, evaluate the improvement in their immune status, and identify reasons for sub-optimal adherence to therapy. Methods: This cross-sectional study involved administration of an adapted and pretested questionnaire to 601 consenting patients attending the two tertiary health institutions in Bayelsa State, Nigeria: the Federal Medical Centre, Yenagoa, and the Niger-Delta University Teaching Hospital, Okolobiri. The tool was divided into sections covering socio-demographic data, HIV knowledge and adherence to combined antiretroviral therapy. Information on the patients' CD4+ T cell counts was retrieved from their medical records. Adherence was assessed by asking patients to recall their intake of prescribed doses over the previous fourteen days, and subjects who had taken 95-100% of the prescribed antiretroviral drugs were considered adherent. Results: Three hundred and forty-eight (57.9%) of the subjects were female and 253 (42.1%) were male. The majority, 557 (92.7%), had good knowledge of HIV and combined antiretroviral therapy, scoring 70.0% or above. A larger proportion of the respondents, 441 (73.4%), had > 95% adherence. The most important reasons given for missing doses included "simply forgot", 147 (24.5%), and "wanted to avoid the side-effects of drugs", 33 (5.5%). There were remarkable improvements in the immune status of the subjects, with the proportion of subjects with a CD4+ T cell count greater than 350 cells/mm3 increasing from 33 (5.5%) at therapy initiation to 338 (56.3%) during the study period (p < 0.0001). Conclusion: The adherence level of 73.4% was low, which calls for intervention and improvement. Combined antiretroviral therapy significantly improved the immune status of the majority of patients, and this improvement must be sustained. "Simply forgot" was the most important reason for missing doses.
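The adherence rule described, taking at least 95% of prescribed doses over the 14-day recall window, reduces to a simple ratio; a minimal sketch (the twice-daily regimen is a hypothetical example):

    # Classify a patient as adherent if >= 95% of prescribed doses were taken.
    def is_adherent(doses_taken: int, doses_prescribed: int, threshold: float = 0.95) -> bool:
        return doses_taken / doses_prescribed >= threshold

    # Hypothetical twice-daily regimen over the 14-day recall window: 28 doses
    print(is_adherent(27, 28))  # ~96% taken -> True
    print(is_adherent(26, 28))  # ~93% taken -> False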

Relevance: 100.00%

Abstract:

Aims: The aims of this study were 1) to identify and describe health economic studies that have used quality-adjusted life years (QALYs) based on actual measurements of patients' health-related quality of life (HRQoL); 2) to test the feasibility of routine collection of HRQoL data as an indicator of the effectiveness of secondary health care; and 3) to establish and compare the cost-utility of three large-volume surgical procedures in a real-world setting in the Helsinki University Central Hospital, a large referral hospital providing secondary and tertiary health-care services for a population of approximately 1.4 million. Patients and methods: To identify studies that have used QALYs as an outcome measure, a systematic search of the literature was performed using the Medline, Embase, CINAHL, SCI and Cochrane Library electronic databases. Initial screening of the identified articles involved two reviewers independently reading the abstracts; the full-text articles were also evaluated independently by two reviewers, with a third reviewer used in cases where the two could not reach consensus on which articles should be included. The feasibility of routinely evaluating the cost-effectiveness of secondary health care was tested by setting up a system for collecting HRQoL data on approximately 4,900 patients before and after operative treatments performed in the hospital. The HRQoL data used as an indicator of treatment effectiveness were combined with diagnostic and financial indicators routinely collected in the hospital. To compare the cost-effectiveness of three surgical interventions, 712 patients admitted for routine operative treatment completed the 15D HRQoL questionnaire before and 3-12 months after the operation. QALYs were calculated using the obtained utility data and the expected remaining life years of the patients. Direct hospital costs were obtained from the clinical patient administration database of the hospital, and a cost-utility analysis was performed from the perspective of the provider of secondary health-care services. Main results: The systematic review (Study I) showed that although QALYs gained are considered an important measure of the effectiveness of health care, the number of studies in which QALYs are based on actual measurements of patients' HRQoL is still fairly limited. Of the reviewed full-text articles, only 70 reported QALYs based on actual before-after measurements using a valid HRQoL instrument. Collection of simple cost-effectiveness data in secondary health care is feasible and could easily be expanded and performed on a routine basis (Study II). It allows meaningful comparisons between various treatments and provides a means for allocating limited health care resources. The cost per QALY gained was €2,770 for cervical operations and €1,740 for lumbar operations. In cases where surgery was delayed, the cost per QALY doubled (Study III). The cost per QALY varied between subgroups in cataract surgery (Study IV): €5,130 for patients having both eyes operated on and €8,210 for patients with only one eye operated on during the 6-month follow-up. In patients whose first eye had been operated on before the study period, mean HRQoL deteriorated after surgery, precluding the establishment of a cost per QALY.
In arthroplasty patients (Study V), the mean cost per QALY gained over a one-year period was €6,710 for primary hip replacement, €52,270 for revision hip replacement, and €14,000 for primary knee replacement. Conclusions: Although the importance of cost-utility analyses has been stressed in recent years, there are only a limited number of studies in which the evaluation is based on patients' own assessments of treatment effectiveness. Most cost-effectiveness and cost-utility analyses are based on modeling that employs expert opinion regarding the outcome of treatment, not on patient-derived assessments. Routine collection of effectiveness information from patients entering treatment in secondary health care turned out to be straightforward and did not, for instance, require additional personnel on the wards in which the study was executed. The mean patient response rate was more than 70%, suggesting that patients were happy to participate and appreciated the fact that the hospital showed an interest in their well-being even after the actual treatment episode had ended. Spinal surgery led to a statistically significant and clinically important improvement in HRQoL. The cost per QALY gained was reasonable, at less than half of that observed, for instance, for hip replacement surgery. However, prolonged waiting for an operation approximately doubled the cost per QALY gained from the surgical intervention. The mean utility gain following routine cataract surgery in a real-world setting was relatively small and confined mostly to patients who had had both eyes operated on. The cost of cataract surgery per QALY gained was higher than previously reported and was associated with a considerable degree of uncertainty. Hip and knee replacement both improve HRQoL. The cost per QALY gained from knee replacement is twofold that of hip replacement. Cost-utility results from the three studied specialties showed that there is great variation in the cost-utility of surgical interventions performed in a real-world setting, even when only common, widely accepted interventions are considered. However, the cost per QALY of all the studied interventions, except revision hip arthroplasty, was well below €50,000, a figure sometimes cited in the literature as a threshold for the cost-effectiveness of an intervention. Based on the present study, it may be concluded that routine evaluation of the cost-utility of secondary health care is feasible and produces information essential for a rational and balanced allocation of scarce health care resources.
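As described above, QALY gains were derived from before/after utility measurements combined with expected remaining life years, and divided into direct hospital costs. A minimal sketch of that calculation (figures hypothetical; discounting of future life years is ignored):

    # Cost per QALY from a before/after utility change (15D-style 0-1 scale).
    def cost_per_qaly(utility_before: float, utility_after: float,
                      remaining_life_years: float, direct_cost: float) -> float:
        qaly_gain = (utility_after - utility_before) * remaining_life_years
        return direct_cost / qaly_gain

    # Hypothetical hip replacement: utility 0.80 -> 0.86, 20 remaining life years
    print(f"EUR {cost_per_qaly(0.80, 0.86, 20, 8000):,.0f} per QALY")  # ~6,700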

Relevance: 100.00%

Abstract:

Background: There is growing interest in the potential utility of real-time polymerase chain reaction (PCR) for diagnosing bloodstream infection by detecting pathogen deoxyribonucleic acid (DNA) in blood samples within a few hours. SeptiFast (Roche Diagnostics GmbH, Mannheim, Germany) is a multipathogen probe-based system targeting ribosomal DNA sequences of bacteria and fungi. It detects and identifies the commonest pathogens causing bloodstream infection. As background to this study, we report a systematic review of Phase III diagnostic accuracy studies of SeptiFast, which reveals uncertainty about its likely clinical utility based on widespread evidence of deficiencies in study design and reporting, with a high risk of bias. 

Objective: Determine the accuracy of SeptiFast real-time PCR for the detection of health-care-associated bloodstream infection, against standard microbiological culture. 

Design: Prospective multicentre Phase III clinical diagnostic accuracy study using the Standards for the Reporting of Diagnostic Accuracy Studies (STARD) criteria. 

Setting: Critical care departments within NHS hospitals in the north-west of England. 

Participants: Adult patients requiring blood culture (BC) when developing new signs of systemic inflammation. 

Main outcome measures: SeptiFast real-time PCR results at species/genus level compared with microbiological culture in association with independent adjudication of infection. Metrics of diagnostic accuracy were derived including sensitivity, specificity, likelihood ratios and predictive values, with their 95% confidence intervals (CIs). Latent class analysis was used to explore the diagnostic performance of culture as a reference standard. 

Results: Of 1006 new patient episodes of systemic inflammation in 853 patients, 922 (92%) met the inclusion criteria and provided sufficient information for analysis. Index test assay failure occurred on 69 (7%) occasions. Adult patients had been exposed to a median of 8 days (interquartile range 4–16 days) of hospital care, had high levels of organ support activities and recent antibiotic exposure. SeptiFast real-time PCR, when compared with culture-proven bloodstream infection at species/genus level, had better specificity (85.8%, 95% CI 83.3% to 88.1%) than sensitivity (50%, 95% CI 39.1% to 60.8%). When compared with pooled diagnostic metrics derived from our systematic review, our clinical study revealed lower test accuracy of SeptiFast real-time PCR, mainly as a result of low diagnostic sensitivity. There was a low prevalence of BC-proven pathogens in these patients (9.2%, 95% CI 7.4% to 11.2%) such that the post-test probabilities of both a positive (26.3%, 95% CI 19.8% to 33.7%) and a negative SeptiFast test (5.6%, 95% CI 4.1% to 7.4%) indicate the potential limitations of this technology in the diagnosis of bloodstream infection. However, latent class analysis indicates that BC has a low sensitivity, questioning its relevance as a reference test in this setting. Using this analysis approach, the sensitivity of the SeptiFast test was low but also appeared significantly better than BC. Blood samples identified as positive by either culture or SeptiFast real-time PCR were associated with a high probability (> 95%) of infection, indicating higher diagnostic rule-in utility than was apparent using conventional analyses of diagnostic accuracy. 
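The reported post-test probabilities follow from the sensitivity, specificity and prevalence above via Bayes' rule; a minimal sketch that reproduces the 26.3% and 5.6% figures:

    # Post-test probability of bloodstream infection given a SeptiFast result.
    def post_test(sens, spec, prev):
        p_pos = sens * prev + (1 - spec) * (1 - prev)   # P(test positive)
        given_pos = sens * prev / p_pos                 # P(infection | positive test)
        p_neg = (1 - sens) * prev + spec * (1 - prev)   # P(test negative)
        given_neg = (1 - sens) * prev / p_neg           # P(infection | negative test)
        return given_pos, given_neg

    pos, neg = post_test(sens=0.50, spec=0.858, prev=0.092)
    print(f"positive test: {pos:.1%}, negative test: {neg:.1%}")  # 26.3%, 5.6%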

Conclusion: SeptiFast real-time PCR on blood samples may have rapid rule-in utility for the diagnosis of health-care-associated bloodstream infection, but the lack of sensitivity is a significant limiting factor. Innovations aimed at improving the diagnostic sensitivity of real-time PCR in this setting are urgently required. Future work recommendations include technology developments to improve the efficiency of pathogen DNA extraction and the capacity to detect a much broader range of pathogens and drug-resistance genes, and the application of new statistical approaches able to more reliably assess test performance in situations where the reference standard (e.g. blood culture in the setting of high antimicrobial use) is prone to error.

Relevance: 100.00%

Abstract:

BACKGROUND: Even though the Swedish national guidelines for stroke care (SNGSC) have been accessible for nearly a decade, access to stroke rehabilitation in out-patient health care varies considerably. In order to aid future intervention studies on the implementation of the SNGSC, this study assessed the feasibility and acceptability of study procedures, including analysis of the context in out-patient health care settings. METHODS: The feasibility and acceptability of recruitment, observations and interviews with managers, staff and patients were assessed, as well as the feasibility of surveying health care records. RESULTS: Identifying patients from the hospitals was feasible, but not from out-patient care, where a need to relieve clinical staff of the recruitment process was identified. Assessing adherence to the guidelines and standardized evaluation of patient outcomes through health care records was found to be feasible, and suitable assessment tools to evaluate patient outcomes were identified. Interviews were found to be a feasible and acceptable tool to survey the context of the health care setting. CONCLUSION: In this feasibility study a variety of qualitative and quantitative data collection procedures and measures were tested. The results indicate a set of feasible and acceptable data collection procedures and suitable measures for studying the implementation of stroke guidelines in an out-patient health care context.

Relevance: 100.00%

Abstract:

Introduction: The aim of this study was to assess the epidemiological and operational characteristics of the Leprosy Program before and after its integration into the Primary Healthcare Services of the municipality of Aracaju-Sergipe, Brazil. Methods: Data were drawn from the national database. The study period was divided into preintegration (1996-2000) and postintegration (2001-2007) phases. Annual detection rates were calculated. Frequency data on clinico-epidemiological variables of cases detected and treated in the two periods were compared using the chi-squared (χ2) test, adopting a 5% level of significance. Results: Detection rates overall, and in subjects younger than 15 years, were greater for the postintegration period and were higher than the rates recorded for Brazil as a whole during the same periods. A total of 780 and 1,469 cases were registered during the preintegration and postintegration periods, respectively. Observations for the postintegration period were as follows: I) a higher proportion of cases with disability grade assessed at diagnosis, increasing from 60.9% to 78.8% (p < 0.001), and at the end of treatment, from 41.4% to 44.4% (p < 0.023); II) an increase in the proportion of cases detected by contact examination, from 2.1% to 4.1% (p < 0.001); and III) a lower level of treatment default, with a decrease from 5.64 to 3.35 (p < 0.008). Only 34% of cases registered from 2001 to 2007 were examined. Conclusions: The shift observed in detection rates overall, and in subjects younger than 15 years, during the postintegration period indicates an increased level of health care access. The fall in the number of patients abandoning treatment indicates greater adherence to treatment. However, previous shortcomings in key actions, pivotal to attaining the outcomes and impact envisaged for the program, persisted in the postintegration period.
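For instance, the first comparison can be reproduced approximately by rebuilding the 2x2 table from the reported totals and percentages (counts are rounded, so this is illustrative only; assumes SciPy is available):

    # Chi-squared test: disability grade assessed at diagnosis, pre vs post integration.
    from scipy.stats import chi2_contingency

    pre_n, post_n = 780, 1469
    assessed_pre = round(0.609 * pre_n)      # 60.9% of preintegration cases
    assessed_post = round(0.788 * post_n)    # 78.8% of postintegration cases
    table = [[assessed_pre, pre_n - assessed_pre],
             [assessed_post, post_n - assessed_post]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # p < 0.001, as reported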

Relevance: 100.00%

Abstract:

BACKGROUND: There is little evidence on differences across health care systems in the choice and outcome of treatment of chronic low back pain (CLBP), with spinal surgery and conservative treatment as the main options. At least six randomised controlled trials comparing these two options have been performed; they show conflicting results without clear-cut evidence for the superior effectiveness of any of the evaluated interventions, and could not address whether treatment effect varied across patient subgroups. Cost-utility analyses display inconsistent results when comparing surgical and conservative treatment of CLBP. Due to its higher feasibility, we chose to conduct a prospective observational cohort study. METHODS: This study aims to examine whether (1) differences across health care systems result in different treatment outcomes of surgical and conservative treatment of CLBP; (2) patient characteristics (work-related, psychological factors, etc.) and co-interventions (physiotherapy, cognitive behavioural therapy, return-to-work programs, etc.) modify the outcome of treatment for CLBP; and (3) cost-utility in terms of quality-adjusted life years differs between surgical and conservative treatment of CLBP. This study will recruit 1000 patients from orthopaedic spine units, rehabilitation centres, and pain clinics in Switzerland and New Zealand. Effectiveness will be measured by the Oswestry Disability Index (ODI) at baseline and after six months. The change in ODI will be the primary endpoint of this study. Multiple linear regression models will be used, with the change in ODI from baseline to six months as the dependent variable and the type of health care system, type of treatment, patient characteristics, and co-interventions as independent variables, as sketched below. Interactions will be incorporated between type of treatment and different co-interventions and patient characteristics. Cost-utility will be measured with an index based on EQ-5D in combination with cost data. CONCLUSION: This study will provide evidence on whether differences across health care systems in the outcome of treatment of CLBP exist. It will classify patients with CLBP into different clinical subgroups and help to identify specific target groups who might benefit from specific surgical or conservative interventions. Furthermore, cost-utility differences will be identified for different groups of patients with CLBP. The main results of this study should be replicated in future studies on CLBP.
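A sketch of such a model in Python (statsmodels), with the change in ODI as the dependent variable and treatment-by-co-intervention interactions; the data file and column names are hypothetical placeholders, not from the protocol:

    # Multiple linear regression for the six-month change in ODI.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("clbp_cohort.csv")  # hypothetical cohort extract
    model = smf.ols(
        "odi_change ~ health_system + treatment * physiotherapy"
        " + treatment * cbt + age + work_status + depression_score",
        data=df,
    ).fit()
    print(model.summary())  # coefficients for main effects and interactions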

Relevance: 100.00%

Abstract:

INTRODUCTION: Patients admitted to intensive care following surgery for faecal peritonitis present particular challenges in terms of clinical management and risk assessment. Collaborating surgical and intensive care teams need shared perspectives on prognosis. We aimed to determine the relationship between dynamic assessment of trends in selected variables and outcomes. METHODS: We analysed trends in physiological and laboratory variables during the first week of intensive care unit (ICU) stay in 977 patients at 102 centres across 16 European countries. The primary outcome was 6-month mortality. Secondary endpoints were ICU, hospital and 28-day mortality. For each trend, Cox proportional hazards (PH) regression analyses, adjusted for age and sex, were performed for each endpoint. RESULTS: Trends over the first 7 days of the ICU stay independently associated with 6-month mortality were worsening thrombocytopaenia (hazard ratio (HR) = 1.02; 95% confidence interval (CI), 1.01 to 1.03; P < 0.001) and worsening renal function (total daily urine output: HR = 1.02; 95% CI, 1.01 to 1.03; P < 0.001; Sequential Organ Failure Assessment (SOFA) renal subscore: HR = 0.87; 95% CI, 0.75 to 0.99; P = 0.047), maximum bilirubin level (HR = 0.99; 95% CI, 0.99 to 0.99; P = 0.02) and the Glasgow Coma Scale (GCS) SOFA subscore (HR = 0.81; 95% CI, 0.68 to 0.98; P = 0.028). Changes in renal function (total daily urine output and the renal component of the SOFA score), the GCS component of the SOFA score, the total SOFA score and worsening thrombocytopaenia were also independently associated with the secondary outcomes (ICU, hospital and 28-day mortality). We detected the same pattern when we analysed trends on days 2, 3 and 5. Dynamic trends in all other measured laboratory and physiological variables, and in radiological findings, changes in respiratory support, renal replacement therapy and inotrope and/or vasopressor requirements, were not retained as independently associated with outcome in multivariate analysis. CONCLUSIONS: Only deterioration in renal function, thrombocytopaenia and SOFA score over the first 2, 3, 5 and 7 days of the ICU stay were consistently associated with mortality at all endpoints. These findings may help to inform clinical decision making in patients with this common cause of critical illness.
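A sketch of one such trend analysis in Python (the lifelines library), fitting a Cox PH model for 6-month mortality adjusted for age and sex; the data file and column names are hypothetical placeholders:

    # Cox proportional hazards model for a day 1-7 platelet trend.
    import pandas as pd
    from lifelines import CoxPHFitter

    df = pd.read_csv("peritonitis_cohort.csv")  # hypothetical extract; sex coded 0/1
    cph = CoxPHFitter()
    cph.fit(
        df[["followup_days", "died_6m", "platelet_slope_d1_d7", "age", "sex"]],
        duration_col="followup_days",  # time to death or censoring
        event_col="died_6m",           # 1 = died within 6 months
    )
    cph.print_summary()  # hazard ratios with 95% CIs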

Relevance: 100.00%

Abstract:

Background. Previous studies have shown emergency rooms to be overcrowded nationwide. With growing attention to this problem, the Houston-Galveston Area Council (H-GAC) initiated a study in 2005 to assess its region's emergency health care system, and continued this effort in 2007. The purpose of this study was to examine recent changes in volume, capacity and performance in the Houston-Galveston region's emergency health care system and to determine whether the system has been able to respond effectively to residents' demands. Methods. Data were collected by the Houston-Galveston Area Council and The Abaris Group using a self-administered survey covering 2002-2006, completed by administrators of the region's hospitals, EMS providers, and select fire departments that provide EMS services. Data from both studies were combined and matched to examine trends. Results. Volume increased among the reporting hospitals within the Houston-Galveston region from 2002 to 2006; however, capacity remained relatively unchanged. EMS providers reported higher average offload times in 2007 compared with 2005, but the increases were not statistically significant. Hospitals reported transferring a statistically significantly greater percentage of patients in 2006 than in 2004. There was no statistically significant change in any of the other measures. Conclusion. These findings indicate an increase in demand for the Houston-Galveston region's emergency health care services with no change in supply. Additional studies within the area are needed to fully assess and evaluate the impact of these changes on system performance.