73 results for mathematics intervention program
Abstract:
Proteinuria has been associated with cardiovascular events and mortality in community-based cohorts, but its association with mortality and cardiovascular events in patients undergoing percutaneous coronary intervention (PCI) was unknown. The association of urinary dipstick proteinuria with mortality and cardiovascular events (a composite of death, myocardial infarction, or nonhemorrhagic stroke) was evaluated in 5,835 subjects of the EXCITE trial. Dipstick urinalysis was performed before PCI, and proteinuria was defined as trace or greater. Subjects were followed for 210 days (7 months) after enrollment for the occurrence of events. Multivariate Cox regression analysis evaluated the independent association of proteinuria with each outcome. Mean age was 59 years, 21% were women, 18% had diabetes mellitus, and mean estimated glomerular filtration rate was 90 ml/min/1.73 m(2). Proteinuria was present in 750 patients (13%). During follow-up, 22 subjects (2.9%) with proteinuria and 54 subjects (1.1%) without proteinuria died (adjusted hazard ratio 2.83, 95% confidence interval [CI] 1.65 to 4.84, p <0.001). Greater severity of proteinuria strengthened the association with mortality after PCI (low-grade proteinuria, hazard ratio 2.67, 95% CI 1.50 to 4.75; high-grade proteinuria, hazard ratio 3.76, 95% CI 1.24 to 11.37). No significant association was present for cardiovascular events during the relatively short follow-up, although high-grade proteinuria tended toward increased risk (hazard ratio 1.45, 95% CI 0.81 to 2.61). In conclusion, proteinuria was strongly and independently associated with mortality in patients undergoing PCI. These data suggest that a tool as simple and clinically easy to use as the urinary dipstick may help identify and treat patients at high risk of mortality at the time of PCI. (C) 2008 Elsevier Inc. All rights reserved. (Am J Cardiol 2008;102:1151-1155)
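The crude mortality contrast reported above can be checked directly from the stated counts. A minimal unadjusted sketch in Python; the paper's hazard ratio of 2.83 comes from multivariate Cox regression, which this back-of-the-envelope check does not reproduce:

```python
# Crude (unadjusted) mortality comparison from the reported counts.
deaths_prot, n_prot = 22, 750        # subjects with proteinuria
deaths_no, n_no = 54, 5835 - 750     # subjects without proteinuria

rate_prot = deaths_prot / n_prot
rate_no = deaths_no / n_no
crude_rr = rate_prot / rate_no       # crude risk ratio, not the adjusted HR

print(f"mortality with proteinuria:    {rate_prot:.1%}")   # 2.9%
print(f"mortality without proteinuria: {rate_no:.1%}")     # 1.1%
print(f"crude risk ratio:              {crude_rr:.2f}")    # 2.76
```

The crude ratio (about 2.76) is close to the adjusted hazard ratio (2.83), consistent with the abstract's claim of an independent association.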
Abstract:
Background-Randomized trials that studied clinical outcomes after percutaneous coronary intervention (PCI) with bare metal stenting versus coronary artery bypass grafting (CABG) are underpowered to properly assess safety end points such as death, stroke, and myocardial infarction. Pooling data from randomized controlled trials increases the statistical power and allows better assessment of the treatment effect in high-risk subgroups. Methods and Results-We performed a pooled analysis of 3051 patients in 4 randomized trials evaluating the relative safety and efficacy of PCI with stenting and CABG at 5 years for the treatment of multivessel coronary artery disease. The primary end point was the composite of death, stroke, or myocardial infarction. The secondary end point was the occurrence of major adverse cardiac and cerebrovascular events (death, stroke, myocardial infarction, or repeat revascularization). We tested for heterogeneities in treatment effect in patient subgroups. At 5 years, the cumulative incidence of death, myocardial infarction, and stroke was similar in patients randomized to PCI with stenting versus CABG (16.7% versus 16.9%, respectively; hazard ratio, 1.04; 95% confidence interval, 0.86 to 1.27; P = 0.69). Repeat revascularization, however, occurred significantly more frequently after PCI than CABG (29.0% versus 7.9%, respectively; hazard ratio, 0.23; 95% confidence interval, 0.18 to 0.29; P<0.001). Major adverse cardiac and cerebrovascular events were significantly higher in the PCI than the CABG group (39.2% versus 23.0%, respectively; hazard ratio, 0.53; 95% confidence interval, 0.45 to 0.61; P<0.001). No heterogeneity of treatment effect was found in the subgroups, including diabetic patients and those presenting with 3-vessel disease. Conclusions-In this pooled analysis of 4 randomized trials, PCI with stenting was associated with a long-term safety profile similar to that of CABG.
However, as a result of persistently lower repeat revascularization rates in the CABG patients, overall major adverse cardiac and cerebrovascular event rates were significantly lower in the CABG group at 5 years.
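The pooled percentages above translate into the following absolute and relative differences between treatment arms. A hedged sketch using only the figures stated in the abstract:

```python
# Five-year event proportions as reported in the pooled analysis (PCI vs. CABG).
pci_revasc, cabg_revasc = 0.290, 0.079   # repeat revascularization
pci_macce, cabg_macce = 0.392, 0.230     # major adverse cardiac/cerebrovascular events

print(f"repeat revascularization: absolute difference {pci_revasc - cabg_revasc:.1%}, "
      f"relative {pci_revasc / cabg_revasc:.1f}x")   # 21.1%, 3.7x
print(f"MACCE: absolute difference {pci_macce - cabg_macce:.1%}, "
      f"relative {pci_macce / cabg_macce:.1f}x")     # 16.2%, 1.7x
```

The roughly 3.7-fold excess in repeat revascularization after PCI is what drives the MACCE difference, as the conclusion above notes.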
Abstract:
Background-The effectiveness of heart failure disease management programs in patients under cardiologists' care over long-term follow-up is not established. Methods and Results-We investigated the effects of a disease management program with repetitive education and telephone monitoring on primary (combined death or unplanned first hospitalization, and quality-of-life changes) and secondary end points (hospitalization, death, and adherence). The REMADHE [Repetitive Education and Monitoring for ADherence for Heart Failure] trial is a long-term randomized, prospective, parallel trial designed to compare intervention with control. One hundred seventeen patients were randomized to usual care and 233 to additional intervention. The mean follow-up was 2.47 +/- 1.75 years, with 54% adherence to the program. In the intervention group, the primary end point composite of death or unplanned hospitalization was reduced (hazard ratio, 0.64; confidence interval, 0.43 to 0.88; P=0.008), driven by a reduction in hospitalization. The quality-of-life questionnaire score improved only in the intervention group (P<0.003). Mortality was similar in both groups. The number of hospitalizations (1.3 +/- 1.7 versus 0.8 +/- 1.3, P<0.0001), total hospital days during follow-up (19.9 +/- 51 versus 11.1 +/- 24 days, P<0.0001), and the need for emergency visits (4.5 +/- 10.6 versus 1.6 +/- 2.4, P<0.0001) were lower in the intervention group. Beneficial effects were homogeneous across sex, race, diabetes status, age, functional class, and etiology. Conclusions-Over a longer follow-up period than in previous studies, this heart failure disease management program model, with patients under the supervision of a cardiologist, was associated with a reduction in unplanned hospitalization, a reduction in total hospital days, and a reduced need for emergency care, as well as improved quality of life, despite modest program adherence over time. (Circ Heart Fail. 2008;1:115-124.)
Abstract:
Introduction. Spontaneous spinal epidural hematoma (SEH) represents 0.3-0.9% of spinal epidural space-occupying lesions, and most surgeons advocate aggressive and early surgical intervention. In this paper we describe a patient with SEH and sudden paraplegia. Case report. This 30-year-old man had experienced one episode of sudden dorsal pain two days before the current admission; while he awaited medical attention, his legs suddenly became weak, and within minutes he became completely paraplegic. The patient had complete paraplegia, analgesia below the T4 level, and urinary retention. He was not taking any anticoagulant agent and had no coagulopathy. Computed tomography demonstrated a dorsally located epidural hematoma extending from the T3 to the T6 level with spinal cord compression. A laminectomy from T3 to T7 was performed four hours after symptom onset. Postoperatively, the patient showed partial sensory recovery and grade II motor strength. The patient was directed to a neurorehabilitation program, and at the last medical evaluation he had recovered to motor grade III-IV without pain. Conclusion. SEH is rare, with severe neurological consequences for patients, and early surgical treatment remains essential for motor recovery.
Abstract:
The incidence of 21-hydroxylase deficiency (CYP21D) congenital adrenal hyperplasia (CAH) in Brazil is purportedly one of the highest in the world (1:7,533). However, this information is not based on official data. The aim of this study was to determine the incidence of CYP21D CAH in the state of Goias, Brazil, based on the 2005 results of government-funded mandatory screening. Of the live births during this period, 92.95% were screened by heel-prick capillary 17 alpha-hydroxyprogesterone (17-OHP). Of these, 82,343 were normal, 28 were at high risk for CAH, and 232 at low risk for CAH. Eight cases, all from the high-risk group, were confirmed. Eight asymptomatic children at 6-18 months of age still have high 17-OHP levels and await diagnostic definition. Based on the number of confirmed CYP21D CAH cases among the 82,603 screened, the estimated annual incidence of the disease was 1:10,325, lower than the previously reported rate in Brazil.
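The reported incidence of 1:10,325 follows directly from the screened and confirmed counts above. A quick arithmetic check:

```python
# Incidence expressed as 1 : (screened / confirmed cases), as in the abstract.
screened = 82_603
confirmed = 8

incidence_denom = screened / confirmed
print(f"estimated incidence: 1:{incidence_denom:,.0f}")  # 1:10,325
```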
Abstract:
Objective: This analysis of the Lipid Treatment Assessment Project 2 population compared lipid goal attainment by diabetes and metabolic syndrome status. Research design and methods: Dyslipidaemic patients aged >= 20 years on stable lipid lowering therapy had their lipid levels determined once during enrolment at investigation sites in nine countries between September 2006 and April 2007. Achievement of low-density lipoprotein (LDL) cholesterol success, triglycerides < 150 mg/dl (1.7 mmol/l), and high-density lipoprotein (HDL) cholesterol success (> 40 mg/dl [1.0 mmol/l] in men or > 50 mg/dl [1.3 mmol/l] in women) was compared using logistic regression. Results: A total of 9955 patients were evaluated. Patients with diabetes, compared with those without diabetes, had lower achievement of LDL cholesterol goals (according to National Cholesterol Education Program Adult Treatment Panel [NCEP ATP] III guidelines; 67% vs. 75%), triglycerides < 150 mg/dl (55% vs. 64%), and HDL cholesterol success (61% vs. 74%; p < 0.0001 for all comparisons). The significantly lower lipid goal attainment in patients with diabetes was consistent across participating world regions. Patients with metabolic syndrome, compared with those without metabolic syndrome, had lower achievement of NCEP ATP III LDL cholesterol goals (69% vs. 76%), triglycerides < 150 mg/dl (36% vs. 83%), and HDL cholesterol success (49% vs. 89%; p < 0.0001 for all comparisons). As the number of metabolic syndrome components increased, lipid success rates progressively decreased (p < 0.0001 for LDL cholesterol success, triglycerides < 150 mg/dl, and HDL cholesterol success). Conclusions: This analysis indicates that despite their increased cardiovascular risk, patients with diabetes or metabolic syndrome remain undertreated.
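The attainment percentages for patients with versus without diabetes imply the following crude (unadjusted) odds ratios. The study itself used logistic regression, so this sketch only approximates the reported comparisons:

```python
def crude_odds_ratio(p1, p2):
    """Crude odds ratio for group 1 vs. group 2, given success proportions."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Success proportions from the abstract: diabetes vs. no diabetes.
print(f"LDL goal attainment: OR = {crude_odds_ratio(0.67, 0.75):.2f}")  # < 1: lower odds with diabetes
print(f"triglycerides < 150: OR = {crude_odds_ratio(0.55, 0.64):.2f}")
print(f"HDL success:         OR = {crude_odds_ratio(0.61, 0.74):.2f}")
```

All three crude odds ratios fall below 1, mirroring the abstract's finding of consistently lower lipid goal attainment in patients with diabetes.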
Abstract:
Objective: Physical and psychological incapacity, including fear of falling, is related to decreased satisfaction with life in osteoporosis (OP). The impact of a balance exercise program on quality of life is not well established. We have therefore investigated the effect of a 12-month Balance Training Program on quality of life, functional balance, and falls in elderly OP women. Methods: Sixty consecutive women with senile OP were randomized into a Balance Training Group (BT) of 30 patients and a no-intervention control group (CG) of 30 patients. The BT program included techniques to improve balance over a period of 12 months (1-h exercise session/week and home-based exercises). Quality of life was evaluated before and at the end of the trial using the Osteoporosis Assessment Questionnaire (OPAQ); functional balance was evaluated by the Berg Balance Scale (BBS). Falls in the preceding year were noted and compared with those during the study period. Results: The comparison of OPAQ variations (INITIAL-FINAL) revealed a significant improvement in quality of life in all parameters for BT compared with CG: well-being (1.61 +/- 1.44 vs. -1.46 +/- 1.32, p < 0.001), physical function (1.30 +/- 1.33 vs. -0.36 +/- 0.82, p < 0.001), psychological status (1.58 +/- 1.36 vs. -1.02 +/- 0.83, p < 0.001), symptoms (2.76 +/- 1.96 vs. -0.63 +/- 0.87, p < 0.001), and social interaction (1.01 +/- 1.51 vs. 0.35 +/- 1.08, p < 0.001). Of note, this overall benefit was paralleled by an improvement in BBS (-5.5 +/- 5.67 vs. +0.5 +/- 4.88, p < 0.001) and a reduction in falls in 50% of the BT group vs. 26.6% of the CG (RR 1.88, p < 0.025). Conclusion: The long-term Balance Training Program for OP women provides a striking overall improvement in health-related quality of life, in parallel with improved functional balance and reduced falls. (C) 2010 Elsevier Ireland Ltd. All rights reserved.
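The relative risk reported above can be verified from the stated proportions of patients whose falls decreased in each group:

```python
# Proportion of patients with reduced falls, as reported in the abstract.
reduced_bt = 0.50    # Balance Training group
reduced_cg = 0.266   # control group

rr = reduced_bt / reduced_cg
print(f"relative 'risk' of fall reduction: {rr:.2f}")  # 1.88
```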
Abstract:
MENDES, F. A. R., F. M. ALMEIDA, A. CUKIER, R. STELMACH, W. JACOB-FILHO, M. A. MARTINS, and C. R. F. CARVALHO. Effects of Aerobic Training on Airway Inflammation in Asthmatic Patients. Med. Sci. Sports Exerc., Vol. 43, No. 2, pp. 197-203, 2011. Purpose: There is evidence suggesting that physical activity has anti-inflammatory effects in many chronic diseases; however, the role of exercise in airway inflammation in asthma is poorly understood. We aimed to evaluate the effects of an aerobic training program on eosinophilic inflammation (primary aim) and nitric oxide (secondary aim) in patients with moderate or severe persistent asthma. Methods: Sixty-eight patients randomly assigned to either control (CG) or aerobic training (TG) groups were studied during the period between medical consultations. Patients in the CG (educational program + breathing exercises; N = 34) and TG (educational program + breathing exercises + aerobic training; N = 34) were examined twice a week during a 3-month period. Before and after the intervention, patients underwent induced sputum, fractional exhaled nitric oxide (FeNO), pulmonary function, and cardiopulmonary exercise testing. Asthma symptom-free days were quantified monthly, and asthma exacerbations were monitored during the 3 months of intervention. Results: At 3 months, decreases in the total and eosinophil cell counts in induced sputum (P = 0.004) and in the levels of FeNO (P = 0.009) were observed after intervention only in the TG. The number of asthma symptom-free days and VO2max also significantly improved (P < 0.001), and fewer asthma exacerbations occurred in the TG (P < 0.01). In addition, the TG presented a strong positive relationship between baseline FeNO and eosinophil counts as well as their improvement after training (r = 0.77 and r = 0.9, respectively).
Conclusions: Aerobic training reduces sputum eosinophil and FeNO in patients with moderate or severe asthma, and these benefits were more significant in subjects with higher levels of inflammation. These results suggest that aerobic training might be useful as an adjuvant therapy in asthmatic patients under optimized medical treatment.
Abstract:
Background & Aims: EPIC-3 is a prospective, international study that has demonstrated the efficacy of PEG-IFN alfa-2b plus weight-based ribavirin in patients with chronic hepatitis C and significant fibrosis who previously failed any interferon-alfa/ribavirin therapy. The aim of the present study was to assess FibroTest (FT), a validated non-invasive marker of fibrosis in treatment-naive patients, as a possible alternative to biopsy as the baseline predictor of subsequent early virologic (EVR) and sustained virologic response (SVR) in previously treated patients. Methods: Of 2312 patients enrolled, 1459 had an available baseline FT, biopsy, and complete data. Uni- (UV) and multi-variable (MV) analyses were performed using FT and biopsy. Results: Baseline characteristics were similar as in the overall population; METAVIR stage: 28% F2, 29% F3, and 43% F4, previous relapsers 29%, previous PEG-IFN regimen 41%, high baseline viral load (BVL) 64%. 506 patients (35%) had undetectable HCV-RNA at TW12 (TW12neg), with 58% achieving SVR. The accuracy of FT was similar to that in naive patients: AUROC curve for the diagnosis of F4 vs F2 = 0.80 (p<0.00001). Five baseline factors were associated (p<0.001) with SVR in UV and MV analyses (odds ratio: UV/MV): fibrosis stage estimated using FT (4.5/5.9) or biopsy (1.5/1.6), genotype 2/3 (4.5/5.1), BVL (1.5/1.3), prior relapse (1.6/1.6), previous treatment with non-PEG-IFN (2.6/2.0). These same factors were associated (p <= 0.001) with EVR. Among patients TW12neg, two independent factors remained highly predictive of SVR by MV analysis (p <= 0.001): genotype 2/3 (odds ratio = 2.9), fibrosis estimated with FT (4.3) or by biopsy (1.5). Conclusions: FibroTest at baseline is a possible non-invasive alternative to biopsy for the prediction of EVR at 12 weeks and SVR, in patients with previous failures and advanced fibrosis, retreated with PEG-IFN alfa-2b and ribavirin. (C) 2010 European Association for the Study of the Liver. 
Published by Elsevier B.V. All rights reserved.
Abstract:
Background/Aims: The safety of laparoscopic colectomy education methods remains unknown. This study aimed to compare the outcomes of patients undergoing preceptored laparoscopic colectomy with those of patients operated on by the same preceptor. Methodology: A prospective analysis of 30 preceptored operations performed by nine surgeons (PD group) between 2006 and 2008 was conducted. Data from 30 operations matched for diagnosis and surgery type conducted by the same preceptor (P group) were evaluated. Results: Median age was 56.2 (26-80) and 55.2 (22-81) years in the P and PD groups, respectively (p=0.804). Eleven patients (36.7%) were male in the P group and 16 (53.3%) in the PD group (p=0.194). Preceptored operations were not significantly longer than operations performed by the preceptor (198 vs. 156 min; p=0.072). Length of hospital stay did not differ [4 days (3-12) in the P group and 5 (3-15) in the PD group, p=0.296]. Conversion occurred in 4 cases in the PD group and in 2 in the P group (p=0.389). Morbidity was similar (23.3% in the P and 26.7% in the PD group). One patient from the P group and two from the PD group needed re-operation. No deaths occurred. Conclusions: Laparoscopic colorectal surgery preceptorship programs at the learner surgeon's institution are safe. Surgeons' introduction through basic and hands-on courses is required to acquire the skills needed to minimize adverse outcomes.
Abstract:
Spleen removal may be recommended during organ transplantation in ABO-incompatible recipients as well as for hypoperfusion of the grafted liver, besides conventional surgical indications, but elevation of serum lipids has been observed in certain contexts. Aiming to analyze the influence of two dietary regimens on lipid profile, an experimental study was conducted. Methods: Male Wistar rats (n = 86, 333.0 +/- 32.2 g) were divided into four groups: group 1, controls; group 2, sham operation; group 3, total splenectomy; group 4, subtotal splenectomy with upper pole preservation. Subgroups A (cholesterol-reducing chow) and B (cholesterol-rich mixture) were established, and the diet was given for 90 days. Total cholesterol (Tchol), high-density lipoprotein (HDL), low-density lipoprotein (LDL), very-low-density lipoprotein (VLDL), and triglycerides were documented. Results: After total splenectomy, hyperlipidemia ensued even with cholesterol-reducing chow. Tchol, LDL, VLDL, triglycerides, and HDL changed from 56.4 +/- 9.2, 24.6 +/- 4.7, 9.7 +/- 2.2, 48.6 +/- 11.1, and 22.4 +/- 4.3 mg/dL to 66.9 +/- 11.4, 29.9 +/- 5.9, 10.9 +/- 2.3, 54.3 +/- 11.4, and 26.1 +/- 5.1 mg/dL, respectively. Upper pole preservation inhibited abnormalities of Tchol, HDL, VLDL, and triglycerides, and LDL decreased (23.6 +/- 4.9 vs. 22.1 +/- 5.1, P = 0.002). Higher concentrations were triggered by splenectomy plus the cholesterol-enriched diet (Tchol 59.4 +/- 10.1 vs. 83.9 +/- 14.3 mg/dL, P < 0.001), and upper-pole preservation diminished without abolishing hyperlipidemia (Tchol 55.9 +/- 10.0 vs. 62.3 +/- 7.8, P = 0.002). Conclusions: After splenectomy, hyperlipidemia occurred with both diets. Preservation of the upper pole tended to correct dyslipidemia in modality A and to attenuate it in subgroup B. (c) 2008 Wiley-Liss, Inc. Microsurgery 29:154-160, 2009.
Abstract:
Background: This study evaluated the impact of 2 models of educational intervention on rates of central venous catheter-associated bloodstream infections (CVC-BSIs). Methods: This was a prospective observational study conducted between January 2005 and June 2007 in 2 medical intensive care units (designated ICU A and ICU B) in a large teaching hospital. The study was divided into 3 periods: baseline (only rates were evaluated), preintervention (questionnaire to evaluate knowledge of health care workers [HCWs] and observation of CVC care in both ICUs), and intervention (in ICU A, tailored, continuous intervention; in ICU B, a single lecture). The preintervention and intervention periods for each ICU were compared. Results: During the preintervention period, 940 CVC-days were evaluated in ICU A and 843 CVC-days in ICU B. During the intervention period, 2175 CVC-days were evaluated in ICU A and 1694 CVC-days in ICU B. Questions regarding CVC insertion, disinfection during catheter manipulation, and use of an alcohol-based product during dressing application were answered correctly by 70%-100% of HCWs. Nevertheless, HCWs' adherence to these practices in the preintervention period was low for CVC handling and dressing, hand hygiene (6%-35%), and catheter hub disinfection (45%-68%). During the intervention period, HCWs' adherence to hand hygiene was 48%-98%, and adherence to hub disinfection was 82%-97%. CVC-BSI rates declined in both units. In ICU A, this decrease was progressive and sustained, from 12 CVC-BSIs/1000 CVC-days at baseline to 0 after 9 months. In ICU B, the rate initially dropped from 16.2 to 0 CVC-BSIs/1000 CVC-days but then increased to 13.7 CVC-BSIs/1000 CVC-days. Conclusion: Tailored, continuous intervention seems to develop a "culture of prevention" and is more effective than a single intervention, leading to a sustained reduction of infection rates.
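Device-associated infection rates such as those above are conventionally expressed per 1000 device-days. A small helper illustrating the unit; the BSI count below is hypothetical, chosen only to demonstrate the formula against the 940 preintervention CVC-days reported for ICU A:

```python
def bsi_rate_per_1000(bsi_count, catheter_days):
    """Infection rate per 1000 device-days, the standard surveillance unit."""
    return 1000 * bsi_count / catheter_days

# Hypothetical example: 11 CVC-BSIs over the 940 CVC-days observed in ICU A.
print(f"{bsi_rate_per_1000(11, 940):.1f} CVC-BSIs/1000 CVC-days")  # 11.7
```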
Abstract:
Liver transplantation was first performed at the University of Sao Paulo School of Medicine in 1968. Since then, the patient waiting list for liver transplantation has increased at a rate of 150 new cases per month. Liver transplantation itself rose 1.84-fold (from 160 to 295) from 1988 to 2004. However, the number of patients on the liver waiting list jumped 2.71-fold (from 553 to 1500). Consequently, the number of deaths on the liver waiting list moved to a higher level, from 321 to 671, increasing 2.09-fold. We have applied a mathematical model to analyze the potential impact of using a donation after cardiac death (DCD) policy on our liver transplantation program and on the waiting list. Five thousand one hundred people died because of accidents and other violent causes in our state in 2004; of these, only 295 were donors of liver grafts that were transplanted. The model assumed that 5% of these grafts would have been DCD. We found a relative reduction of 27% in the size of the liver transplantation waiting list if DCD had been used by assuming that 248 additional liver transplants would have been performed annually. In conclusion, the use of DCD in our transplantation program would reduce the pressure on our liver transplantation waiting list, reducing it by at least 27%. On the basis of this model, the projected number of averted deaths is about 41,487 in the next 20 years. Liver Transpl 14:1732-1736, 2008. (C) 2008 AASLD.
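A static version of the scenario above — 5% of the 5100 violent deaths as potential donation-after-cardiac-death donors, set against the 2004 waiting list of 1500 — can be sketched as follows. Note that this simple annual ratio is smaller than the 27% reduction the paper obtains from its full dynamic model, which accounts for list inflow over time:

```python
# Static sketch of the DCD scenario; not a reproduction of the paper's model.
violent_deaths = 5100   # deaths from accidents/violent causes in the state, 2004
dcd_fraction = 0.05     # assumed fraction usable as DCD liver grafts
waiting_list = 1500     # patients on the liver waiting list, 2004

additional_grafts = violent_deaths * dcd_fraction
static_reduction = additional_grafts / waiting_list

print(f"potential extra DCD grafts/year: {additional_grafts:.0f}")   # 255
print(f"static waiting-list reduction:   {static_reduction:.0%}")    # 17%
```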
Abstract:
Background: The cerebrospinal fluid (CSF) biomarkers amyloid beta (A beta)-42, total tau (T-tau), and phosphorylated tau (P-tau) demonstrate good diagnostic accuracy for Alzheimer's disease (AD). However, there are large variations in biomarker measurements between studies, and between and within laboratories. The Alzheimer's Association has initiated a global quality control program to estimate and monitor variability of measurements, quantify batch-to-batch assay variations, and identify sources of variability. In this article, we present the results from the first two rounds of the program. Methods: The program is open to laboratories using commercially available kits for A beta, T-tau, or P-tau. CSF samples (aliquots of pooled CSF) are sent for analysis several times a year from the Clinical Neurochemistry Laboratory at the Molndal campus of the University of Gothenburg, Sweden. Each round consists of three quality control samples. Results: Forty laboratories participated. Twenty-six used INNOTEST enzyme-linked immunosorbent assay kits, 14 used Luminex xMAP with the INNO-BIA AlzBio3 kit (both measure A beta-(1-42), P-tau(181P), and T-tau), and 5 used Meso Scale Discovery with the A beta triplex (A beta N-42, A beta N-40, and A beta N-38) or T-tau kits. The total coefficients of variation between the laboratories were 13% to 36%. Five laboratories analyzed the samples six times on different occasions. Within-laboratory precision differed considerably between biomarkers within individual laboratories. Conclusions: Measurements of CSF AD biomarkers show large between-laboratory variability, likely caused by factors related to analytical procedures and the analytical kits. Standardization of laboratory procedures and efforts by kit vendors to increase kit performance might lower variability and will likely increase the usefulness of CSF AD biomarkers. (C) 2011 The Alzheimer's Association. All rights reserved.
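The between-laboratory coefficient of variation (CV = SD/mean) is the metric behind the 13%-36% figures above. A minimal sketch; the measurement values are invented purely to illustrate the calculation, not taken from the study:

```python
from statistics import mean, stdev

# Hypothetical A beta-42 results (pg/mL) reported by five laboratories
# for the same pooled-CSF quality control sample.
ab42_by_lab = [510.0, 450.0, 610.0, 540.0, 395.0]

cv = stdev(ab42_by_lab) / mean(ab42_by_lab)  # sample SD divided by mean
print(f"between-lab CV: {cv:.0%}")  # 16% for these invented values
```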
Abstract:
There is a positive correlation between the intensity of use of a given antibiotic and the prevalence of resistant strains. The more you treat, the more patients infected with resistant strains appear and, as a consequence, the higher the mortality due to the infection and the longer the hospitalization time. In contrast, the less you treat, the higher the mortality rates and the longer the hospitalization times of patients infected with sensitive strains that could have been successfully treated. The hypothesis proposed in this paper is an attempt to resolve this conflict: there must be an optimum treatment intensity that minimizes both the additional mortality and the hospitalization time due to infection by both sensitive and resistant bacterial strains. To test this hypothesis we applied a simple mathematical model that allowed us to estimate the optimum proportion of patients to be treated in order to minimize the total number of deaths and hospitalization time due to the infection in a hospital setting. (C) 2007 Elsevier Inc. All rights reserved.
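The trade-off described above can be illustrated with a toy cost function: harm from untreated sensitive-strain infections falls as the treated fraction p rises, while harm from treatment-selected resistance grows. The curves below are invented for illustration only and are not the paper's model:

```python
def total_cost(p, a=1.0, b=1.0):
    """Toy total burden for treated fraction p in [0, 1]:
    a*(1-p) = burden of untreated sensitive infections (hypothetical, linear),
    b*p**2  = burden from resistance selected by treatment (hypothetical, convex)."""
    return a * (1 - p) + b * p ** 2

# Grid search over treated fractions for the minimum total burden.
best_p = min((i / 1000 for i in range(1001)), key=total_cost)
print(f"optimum treated fraction: {best_p:.2f}")  # 0.50 for these toy curves
```

With these toy coefficients the interior minimum at p = 0.5 shows the qualitative point of the hypothesis: neither treating everyone nor treating no one minimizes total burden.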