49 results for programming education
Abstract:
BACKGROUND: Studies have shown that human immunodeficiency virus (HIV) residual risk is higher in Brazilian than in US and European blood donors, probably due to failure to defer at-risk individuals in Brazil. This study assessed the impact of an educational brochure in enhancing blood donors' knowledge about the screening test window phase and in deterring at-risk individuals from donating. STUDY DESIGN AND METHODS: This trial compared an educational intervention with a blood center's usual practice. The brochure was distributed in alternating months to all donors. After donating, sampled participants completed two questions about their HIV window period knowledge. The impact on HIV risk deferral, leaving without donation, confidential unit exclusion (CUE) use, and test positivity was also analyzed. RESULTS: From August to November 2007 we evaluated 33,940 donations in the main collection center of Fundacao Pro-Sangue/Hemocentro de Sao Paulo in Sao Paulo, Brazil. A significant (p < 0.001) pamphlet effect was found on correct responses to both questions assessing HIV window phase knowledge (68.1% vs. 52.9%) and transfusion risk (91.1% vs. 87.2%). After adjusting for sex and age, the pamphlet effect was strongest for people with more than 8 years of education. There was no significant pamphlet effect on HIV risk deferral rate, leaving without donation, use of CUE, or infectious disease rates. CONCLUSION: While the educational pamphlet increased window period knowledge, contrary to expectations this information alone was not enough to make donors self-defer or acknowledge their behavioral risk.
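As an illustration of the kind of between-arm comparison reported above, a minimal Python sketch of a chi-square test on a 2x2 table follows. The counts are illustrative only (scaled to roughly match the reported 68.1% vs. 52.9% correct answers), not the actual study data, and the test choice is an assumption rather than the paper's stated method.

    from scipy.stats import chi2_contingency

    # Illustrative 2x2 table (per 1000 donations in each arm):
    # rows = pamphlet months / control months, columns = correct / incorrect answers.
    table = [[681, 319],
             [529, 471]]
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p_value:.3g}")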
Abstract:
Background-The effectiveness of heart failure disease management programs in patients under cardiologists' care over long-term follow-up is not established. Methods and Results-We investigated the effects of a disease management program with repetitive education and telephone monitoring on primary (combined death or unplanned first hospitalization and quality-of-life changes) and secondary end points (hospitalization, death, and adherence). The REMADHE [Repetitive Education and Monitoring for ADherence for Heart Failure] trial is a long-term randomized, prospective, parallel trial designed to compare intervention with control. One hundred seventeen patients were randomized to usual care, and 233 to additional intervention. The mean follow-up was 2.47 +/- 1.75 years, with 54% adherence to the program. In the intervention group, the primary end point composite of death or unplanned hospitalization was reduced (hazard ratio, 0.64; confidence interval, 0.43 to 0.88; P=0.008), driven by reduction in hospitalization. The quality-of-life questionnaire score improved only in the intervention group (P<0.003). Mortality was similar in both groups. Number of hospitalizations (1.3 +/- 1.7 versus 0.8 +/- 1.3, P<0.0001), total hospital days during the follow-up (19.9 +/- 51 versus 11.1 +/- 24 days, P<0.0001), and the need for emergency visits (4.5 +/- 10.6 versus 1.6 +/- 2.4, P<0.0001) were lower in the intervention group. Beneficial effects were homogeneous across sex, race, diabetes status, age, functional class, and etiology. Conclusions-Over a longer follow-up period than in previous studies, this heart failure disease management program model, applied to patients under the supervision of a cardiologist, was associated with a reduction in unplanned hospitalization, a reduction of total hospital days, and a reduced need for emergency care, as well as improved quality of life, despite modest program adherence over time. (Circ Heart Fail. 2008;1:115-124.)
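The primary end point (death or unplanned first hospitalization) is a time-to-event outcome. A hedged sketch of how such a between-arm comparison could be set up with the lifelines package is shown below; the follow-up times and event indicators are synthetic, and the variable names are assumptions, not the REMADHE dataset.

    import numpy as np
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(0)
    # Synthetic follow-up times (years) and event indicators; arm sizes taken from the abstract.
    t_usual = rng.exponential(2.0, 117)
    e_usual = rng.integers(0, 2, 117)
    t_interv = rng.exponential(2.8, 233)
    e_interv = rng.integers(0, 2, 233)

    km = KaplanMeierFitter().fit(t_interv, event_observed=e_interv, label="intervention")
    result = logrank_test(t_interv, t_usual,
                          event_observed_A=e_interv, event_observed_B=e_usual)
    print(result.p_value)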
Abstract:
Aims Trials of disease management programmes (DMP) in heart failure (HF) have shown controversial results regarding quality of life. We hypothesized that a DMP applied over the long-term could produce different effects on each of the quality-of-life components. Methods and results We extended the prospective, randomized REMADHE Trial, which studied a DMP in HF patients. We analysed changes in Minnesota Living with Heart Failure Questionnaire components in 412 patients, 60.5% male, age 50.2 +/- 11.4 years, left ventricular ejection fraction 34.7 +/- 10.5%. During a mean follow-up of 3.6 +/- 2.2 years, 6.3% of patients underwent heart transplantation and 31.8% died. Global quality-of-life scores improved in the DMP intervention group, compared with controls, respectively: 57.5 +/- 3.1 vs. 52.6 +/- 4.3 at baseline, 32.7 +/- 3.9 vs. 40.2 +/- 6.3 at 6 months, 31.9 +/- 4.3 vs. 41.5 +/- 7.4 at 12 months, 26.8 +/- 3.1 vs. 47.0 +/- 5.3 at the final assessment; P<0.01. Similarly, the physical component (23.7 +/- 1.4 vs. 21.1 +/- 2.2 at baseline, 16.2 +/- 2.9 vs. 18.0 +/- 3.3 at 6 months, 17.3 +/- 2.9 vs. 23.1 +/- 5.7 at 12 months, 11.4 +/- 1.6 vs. 19.9 +/- 2.4 final; P<0.01), the emotional component (13.2 +/- 1.0 vs. 12.1 +/- 1.4 at baseline, 11.7 +/- 2.7 vs. 12.3 +/- 3.1 at 6 months, 12.4 +/- 2.9 vs. 16.8 +/- 5.9 at 12 months, 6.7 +/- 1.0 vs. 10.6 +/- 1.4 final; P<0.01) and the additional questions (20.8 +/- 1.2 vs. 19.3 +/- 1.8 at baseline, 14.3 +/- 2.7 vs. 17.3 +/- 3.1 at 6 months, 12.4 +/- 2.9 vs. 21.0 +/- 5.5 at 12 months, 6.7 +/- 1.4 vs. 17.3 +/- 2.2 final; P<0.01) were better (lower) in the intervention group. The emotional component improved earlier than the others. Post-randomization quality of life was not associated with events. Conclusion Components of the quality-of-life assessment responded differently to DMP. These results indicate the need for individualized DMP strategies in patients with HF. Trial registration information www.clincaltrials.gov NCT00505050-REMADHE.
Abstract:
Background Heart failure and diabetes often occur simultaneously in patients, but the prognostic value of glycemia in chronic heart failure is debatable. We evaluated the role of glycemia on the prognosis of heart failure. Methods Outpatients with chronic heart failure from the Long-term Prospective Randomized Controlled Study Using Repetitive Education at Six-Month Intervals and Monitoring for Adherence in Heart Failure Outpatients (REMADHE) trial were grouped according to the presence of diabetes and level of glycemia. All-cause mortality/heart transplantation and unplanned hospital admission were evaluated. Results Four hundred fifty-six patients were included (135 [29.5%] female, 124 [27.2%] with diabetes mellitus, age of 50.2 +/- 11.4 years, and left ventricle ejection fraction of 34.7% +/- 10.5%). During follow-up (3.6 +/- 2.2 years), 27 (5.9%) patients underwent heart transplantation and 202 (44.2%) died; survival was similar in patients with and without diabetes mellitus. When patients with and without diabetes were categorized according to glucose range (glycemia <= 100 mg/dL [5.5 mmol/L]), as well as when distributed in quintiles of glucose, survival was significantly worse among patients with lower levels of glycemia. This finding persisted in a Cox proportional hazards regression model that included gender, etiology, left ventricle ejection fraction, left ventricle diastolic diameter, creatinine level, beta-blocker therapy, and functional status (hazard ratio 1.45, 95% CI 1.09-1.69, P = .039). No difference regarding unplanned hospital admission was found. Conclusion We report an inverse association between glycemia and mortality in outpatients with chronic heart failure. These results point to a new pathophysiologic understanding of the interactions between diabetes mellitus, hyperglycemia, and heart disease. (Am Heart J 2010; 159: 90-7.)
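The adjusted association above comes from a Cox proportional hazards model. A minimal sketch with the lifelines package, using synthetic data and only a subset of the covariates named in the abstract (all column names and values are assumptions, not the REMADHE data dictionary):

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 456  # cohort size from the abstract; everything else below is synthetic
    df = pd.DataFrame({
        "time_years": rng.exponential(3.6, n),
        "death_or_transplant": rng.integers(0, 2, n),
        "glucose": rng.normal(110, 30, n),
        "female": rng.integers(0, 2, n),
        "lvef": rng.normal(34.7, 10.5, n),
        "creatinine": rng.normal(1.2, 0.3, n),
        "beta_blocker": rng.integers(0, 2, n),
    })
    cph = CoxPHFitter().fit(df, duration_col="time_years", event_col="death_or_transplant")
    cph.print_summary()  # the exp(coef) column gives adjusted hazard ratios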
Abstract:
Diversity is one of the major characteristics of Brazil and all South America. This paper presents an overview of the current situation of the education of speech and language pathologists (SLP) and audiologists in Brazil and in several other countries of South America. This paper also discusses the main challenges shared by these countries. The discussion is focused on the mutual interferences between education and the areas of professional practice, cultural diversity and continued education. There are many emerging issues about the education of SLP and audiologists in South America. The suggested conclusion is that, despite the many differences, the South American SLP and audiologists' education would benefit from joint efforts and collaborative experiences. Copyright (C) 2010 S. Karger AG, Basel
Abstract:
Background: In Brazil, hospital malnutrition is highly prevalent, physician awareness of malnutrition is low, and nutrition therapy is underprescribed. One alternative to approach this problem is to educate health care providers in clinical nutrition. The present study aims to evaluate the effect of an intensive education course given to health care professionals and students on their ability to diagnose hospital malnutrition. Materials and methods: An intervention study based on a clinical nutrition educational program, offered to medical and nursing students and professionals, was held in a hospital of the Amazon region. Participants were evaluated through improvement of diagnostic ability, according to agreement of malnutrition diagnosis using Subjective Global Assessment before and after the workshop, as compared to independent evaluations (Kappa index, k). To evaluate the impact of the educational intervention on hospital malnutrition diagnosis, medical records were reviewed for documentation of parameters associated with the nutritional status of in-patients. The SPSS statistical software package was used for data analysis. Results: A total of 165 participants concluded the program. The majority (76.4%) were medical and nursing students. Malnutrition diagnosis improved after the course (before k = 0.5; after k = 0.64; p < 0.05). A reduction of false negatives from 50% to 33.3% was observed. During the course, concern with nutritional diagnosis increased (W = 17.57; p < 0.001), and even after the course, improvement in height measurement was detected (chi(2) = 12.87; p < 0.001). Conclusions: Clinical nutrition education improved the ability to diagnose malnutrition; however, the primary impact was on medical and nursing students. To sustain diagnostic capacity, a clinical nutrition program should be part of health professional curricula and be coupled with continuing education for health care providers.
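Diagnostic improvement was quantified as agreement (kappa) between participants' Subjective Global Assessment ratings and independent expert ratings, before and after the workshop. A minimal sketch of such a kappa calculation with scikit-learn, using illustrative labels rather than the study data:

    from sklearn.metrics import cohen_kappa_score

    # Illustrative SGA labels (A = well nourished, B = moderately malnourished,
    # C = severely malnourished) for the same patients, rated by a trainee and
    # by an independent expert; not the study data.
    expert  = ["A", "B", "B", "C", "A", "B", "C", "A"]
    trainee = ["A", "B", "A", "C", "A", "C", "C", "A"]
    print(cohen_kappa_score(expert, trainee))  # study reports kappa 0.50 before vs. 0.64 after

In the study this comparison would be run once on the pre-course ratings and once on the post-course ratings, and the two kappa values compared.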
Abstract:
Background: This study evaluated the impact of 2 models of educational intervention on rates of central venous catheter-associated bloodstream infections (CVC-BSIs). Methods: This was a prospective observational study conducted between January 2005 and June 2007 in 2 medical intensive care units (designated ICU A and ICU B) in a large teaching hospital. The study was divided into 3 periods: baseline (only rates were evaluated), preintervention (questionnaire to evaluate knowledge of health care workers [HCWs] and observation of CVC care in both ICUs), and intervention (in ICU A, tailored, continuous intervention; in ICU B, a single lecture). The preintervention and intervention periods for each ICU were compared. Results: During the preintervention period, 940 CVC-days were evaluated in ICU A and 843 CVC-days were evaluated in ICU B. During the intervention period, 2175 CVC-days were evaluated in ICU A and 1694 CVC-days were evaluated in ICU B. Questions regarding CVC insertion, disinfection during catheter manipulation, and use of an alcohol-based product during dressing application were answered correctly by 70%-100% of HCWs. Nevertheless, HCWs' adherence to these practices in the preintervention period was low for CVC handling and dressing, hand hygiene (6%-35%), and catheter hub disinfection (45%-68%). During the intervention period, HCWs' adherence to hand hygiene was 48%-98%, and adherence to hub disinfection was 82%-97%. CVC-BSI rates declined in both units. In ICU A, this decrease was progressive and sustained, from 12 CVC-BSIs/1000 CVC-days at baseline to 0 after 9 months. In ICU B, the rate initially dropped from 16.2 to 0 CVC-BSIs/1000 CVC-days, but then increased to 13.7 CVC-BSIs/1000 CVC-days. Conclusion: A personalized, continuous intervention seems to develop a "culture of prevention" and is more effective than a single intervention, leading to a sustained reduction in infection rates.
Abstract:
Background: The high prevalence of subjective memory impairment (SMI) in the elderly living in developed countries may be partly dependent on the greater demand placed on them by new technologies. As part of a comprehensive study on cognitive impairment in a population living in the Amazon rainforest, we evaluated the prevalence of SMI and investigated the features associated with it. Methods: We evaluated 163 subjects (82 females) with a mean age of 62.3 years (50-94 years), 110 of whom were illiterate, using the answer to a single question, "Do you have memory problems?", to classify them into groups with or without SMI. The assessment involved application of the Mini-Mental State Examination (MMSE), delayed recall from the Brief Cognitive Battery designed for the evaluation of low educated and illiterate individuals, the Patient Questionnaire (PQ) of the Primary Care Evaluation of Mental Disorders (PRIME-MD), and the Happiness Analogical Scale. Results: A very high prevalence of SMI (70%) was observed, exceeding rates reported by similar studies conducted in developed countries. SMI was more frequent in women, whereas age and education did not affect prevalence. Subjects with SMI had significantly more somatic and psychiatric symptoms on the PQ, as well as lower means on the MMSE, but not on the delayed recall test. Multiple logistic regressions showed that the most important factor associated with the presence of SMI was a high score on the PQ (OR: 3.84, p = 0.011). Conclusion: Psychological and somatic symptoms may be the principal cause of SMI in this population.
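The odds ratio quoted above comes from a multiple logistic regression. A hedged sketch with statsmodels follows; the data are synthetic and the variable names are assumptions, not the study dataset.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 163  # sample size from the abstract; the values below are synthetic
    df = pd.DataFrame({
        "smi": rng.integers(0, 2, n),
        "pq_high_score": rng.integers(0, 2, n),
        "female": rng.integers(0, 2, n),
        "age": rng.normal(62.3, 9.0, n),
        "years_education": rng.integers(0, 9, n),
    })
    fit = smf.logit("smi ~ pq_high_score + female + age + years_education", data=df).fit()
    print(np.exp(fit.params))      # odds ratios; the study reports OR ~ 3.84 for a high PQ score
    print(np.exp(fit.conf_int()))  # 95% confidence intervals on the odds-ratio scale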
Abstract:
Background: People with less education in Europe, Asia, and the United States are at higher risk of mortality associated with daily and longer-term air pollution exposure. We examined whether educational level modified associations between mortality and ambient particulate pollution (PM(10)) in Latin America, using several timescales. Methods: The study population included people who died during 1998-2002 in Mexico City, Mexico; Santiago, Chile; and Sao Paulo, Brazil. We fit city-specific robust Poisson regressions to daily deaths for nonexternal-cause mortality, and then stratified by age, sex, and educational attainment among adults older than age 21 years (none, some primary, some secondary, and high school degree or more). Predictor variables included a natural spline for temporal trend, linear PM(10) and apparent temperature at matching lags, and day-of-week indicators. We evaluated PM(10) for lags 0 and 1 day, and fit an unconstrained distributed lag model for cumulative 6-day effects. Results: The effects of a 10-mu g/m(3) increment in lag 1 PM(10) on all nonexternal-cause adult mortality were 0.39% (95% confidence interval = 0.13%-0.65%) for Mexico City, 1.04% (0.71%-1.38%) for Sao Paulo, and 0.61% (0.40%-0.83%) for Santiago. We found cumulative 6-day effects for adult mortality in Santiago (0.86% [0.48%-1.23%]) and Sao Paulo (1.38% [0.85%-1.91%]), but no consistent gradients by educational status. Conclusions: PM(10) had important short- and intermediate-term effects on mortality in these Latin American cities, but associations did not differ consistently by educational level.
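The core analysis is a city-specific robust Poisson time-series regression with a natural spline for temporal trend, lagged PM(10) and apparent temperature, and day-of-week indicators. A minimal sketch with statsmodels and a patsy spline term is shown below, run on synthetic daily data; all variable names, values, and spline degrees of freedom are assumptions, not the paper's specification.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5 * 365  # roughly the 1998-2002 study window; all values below are synthetic
    daily = pd.DataFrame({
        "day_index": np.arange(n),
        "day_of_week": np.arange(n) % 7,
        "pm10_lag1": rng.gamma(4.0, 15.0, n),           # ug/m3
        "apparent_temp_lag1": rng.normal(20.0, 5.0, n),
        "deaths": rng.poisson(150, n),
    })
    formula = ("deaths ~ cr(day_index, df=35) + pm10_lag1 + "
               "apparent_temp_lag1 + C(day_of_week)")
    fit = smf.glm(formula, data=daily, family=sm.families.Poisson()).fit(cov_type="HC0")
    beta = fit.params["pm10_lag1"]
    print(100 * (np.exp(10 * beta) - 1))  # % change in mortality per 10 ug/m3 of lag-1 PM10

The cumulative 6-day estimate would extend this by entering PM(10) at lags 0 through 5 simultaneously and summing the lag coefficients.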
Abstract:
Background: Current evidence suggests an inverse association between socioeconomic status and stroke incidence. Our aim was to measure the variation in incidence among different city districts (CD) and their association with socioeconomic variables. Methods: We prospectively ascertained all possible stroke cases occurring in the city of Joinville during the period 2005-2007. We determined the incidence for each of the 38 CD, age-adjusted to the population of Joinville. By linear regression analysis, we correlated incidence data with mean years of education (MYE) and mean income per month (MIPM). Results: Of the 1,734 stroke cases registered, 1,034 were first-ever strokes. In the study period, the crude incidence in Joinville was 69.5 per 100,000 (95% confidence interval, 65.3-73.9). The stroke incidence among CD ranged from 37.5 (22.2-64.6) to 151.0 per 100,000 (69.0-286.6). The stroke incidence was inversely correlated with years of education (r = -0.532; p<0.001). MYE and MIPM were strongly related (R = 0.958), resulting in exclusion of MIPM by collinearity. Conclusions: Years of education can explain a wide incidence variation among CD. These results may be useful to guide the allocation of resources in primary prevention policies. Copyright (C) 2011 S. Karger AG, Basel
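The district-level association was estimated by linear regression of incidence on mean years of education. A minimal sketch with scipy, using synthetic district data rather than the Joinville registry (the variable names and values are illustrative assumptions):

    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(0)
    # 38 districts, as in Joinville; the values below are synthetic, not the registry data.
    mean_years_education = rng.uniform(4.0, 12.0, 38)
    incidence_per_100k = 180.0 - 10.0 * mean_years_education + rng.normal(0.0, 15.0, 38)
    result = linregress(mean_years_education, incidence_per_100k)
    print(result.rvalue, result.pvalue)  # the study reports r = -0.532, p < 0.001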
Abstract:
Substance dependence is highly associated with executive cognitive function (ECF) impairments. However, considering that it is difficult to assess ECF clinically, the aim of the present study was to examine the feasibility of a brief neuropsychological tool (the Frontal Assessment Battery, FAB) to detect specific ECF impairments in a sample of substance-dependent individuals (SDI). Sixty-two subjects participated in this study. Thirty DSM-IV-diagnosed SDI, after 2 weeks of abstinence, and 32 healthy individuals (control group) were evaluated with the FAB and other ECF-related tasks: digits forward (DF), digits backward (DB), Stroop Color Word Test (SCWT), and Wisconsin Card Sorting Test (WCST). SDI did not differ from the control group on sociodemographic variables or IQ. However, SDI performed below the controls on DF, DB, and the FAB. The SDI were cognitively impaired in 3 of the 6 cognitive domains assessed by the FAB: abstract reasoning, motor programming, and cognitive flexibility. The FAB correlated with DF, SCWT, and WCST. In addition, some neuropsychological measures were correlated with the amount of alcohol, cannabis, and cocaine use. In conclusion, SDI performed more poorly than the comparison group on the FAB, and the FAB's results were associated with other ECF-related tasks. The results suggested a negative impact of alcohol, cannabis, and cocaine use on ECF. The FAB may be useful in assisting professionals as an instrument to screen for ECF-related deficits in SDI. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
The purpose of this research was to evaluate educational strategies applied to a tele-education leprosy course. The curriculum was designed for members of the Brazilian Family Health Team and was made available through the Sao Paulo Telehealth Portal. The course educational strategy was based on a constructivist learning model in which interactivity was emphasized. The authors assessed motivational aspects of the course using the WebMAC Professional tool. Forty-eight healthcare professionals answered the evaluation questionnaire. Adequate internal consistency was achieved (Cronbach's alpha = 0.79). More than 95% of queried items received good evaluations. Multidimensional analysis according to motivational groups of questions (STIMULATING, MEANINGFUL, ORGANIZED, EASY-TO-USE) showed high agreement. According to WebMAC's criteria, it was considered an "awesome course." The tele-education strategies implemented for the leprosy course achieved high motivational scores.
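Internal consistency was summarized with Cronbach's alpha. A small, self-contained sketch of the alpha formula in Python, applied to illustrative questionnaire responses (the number of items and the answers are assumptions, not the WebMAC data):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha; rows = respondents, columns = questionnaire items."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)

    # 48 respondents (as in the study) answering 20 hypothetical 5-point items.
    responses = np.random.default_rng(0).integers(1, 6, size=(48, 20))
    print(cronbach_alpha(responses))  # the study reports alpha = 0.79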
Abstract:
Background: The Cambridge Cognitive Examination (CAMCOG) is a useful test in screening for Alzheimer's disease (AD). However, the interpretation of CAMCOG cut-off scores is problematic, and reference values are needed for different educational strata. Given the importance of earlier diagnosis of mild dementia, new cut-off values are required which take into account patients with low levels of education. This study aims to evaluate whether the CAMCOG can be used as an accurate screening test among AD patients and normal controls with different educational levels. Methods: Cross-sectional assessment was undertaken of 113 AD patients and 208 elderly controls with heterogeneous educational levels (group 1: 1-4 years; group 2: 5-8 years; and group 3: >= 9 years) from a geriatric clinic, submitted to a thorough diagnostic evaluation for AD including the Cambridge Examination for Mental Disorders of the Elderly (CAMDEX). Controls had no cognitive or mood complaints. Sensitivity (SE) and specificity (SP) for the CAMCOG in each educational group were assessed with receiver operating characteristic (ROC) curves. Results: CAMCOG mean values were lower at lower educational levels in both diagnostic groups (controls - group 1: 87; group 2: 91; group 3: 96; AD - group 1: 63; group 2: 62; group 3: 77). Cut-off scores for the three education groups were 79, 80 and 90, respectively. SE and SP varied among the groups (group 1: 88.1% and 83.5%; group 2: 84.6% and 96%; group 3: 70.8% and 90%). Conclusion: The CAMCOG can be used as a cognitive test for patients with low educational level with good accuracy. Patients with higher education showed lower scores than previously reported.
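Education-specific cut-off scores, sensitivity, and specificity come from ROC analysis. A hedged scikit-learn sketch for a single stratum is shown below, using synthetic scores and a Youden-index cut-off as a stand-in for whatever criterion the study applied; only the group sizes come from the abstract. Lower CAMCOG values suggest AD, hence the sign flip.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(0)
    # Group sizes from the abstract (113 AD, 208 controls); the scores are synthetic.
    diagnosis = np.r_[np.ones(113), np.zeros(208)]
    camcog = np.r_[rng.normal(65, 12, 113), rng.normal(90, 8, 208)]

    fpr, tpr, thresholds = roc_curve(diagnosis, -camcog)  # lower CAMCOG suggests AD
    best = np.argmax(tpr - fpr)                           # Youden index
    cutoff = -thresholds[best]                            # back on the original CAMCOG scale
    print(cutoff, tpr[best], 1 - fpr[best], roc_auc_score(diagnosis, -camcog))

In the study this procedure would be repeated within each of the three education groups.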
Abstract:
Background: Cannabis is the most used illicit drug in the world, and its use has been associated with prefrontal cortex (PFC) dysfunction, including deficits in executive functions (EF). Considering that EF may influence treatment outcome, it would be useful to have a brief neuropsychological battery to assess EF in chronic cannabis users (CCU). In the present study, the Frontal Assessment Battery (FAB), a brief, easy-to-use neuropsychological instrument designed to evaluate EF, was used to evaluate the cognitive functioning of CCU. Methods: We evaluated 107 abstinent CCU with the FAB and compared them with 44 controls matched for age, estimated IQ, and years of education. Results: CCU performed poorly compared to controls (FAB total score = 16.53 vs. 17.09, p < .05). CCU also performed poorly on the Motor Programming subtest (2.47 vs. 2.73, p < .05). Conclusion: This study examined the effects of cannabis on executive functioning and showed evidence that the FAB is sensitive enough to detect EF deficits in early abstinent chronic cannabis users. The clinical significance of these findings remains to be investigated in further longitudinal studies. The FAB may be useful as a screening instrument to evaluate the need for a complete neuropsychological assessment in this population.
Abstract:
Background: Verbal fluency (VF) tasks are simple and efficient clinical tools to detect executive dysfunction and lexico-semantic impairment. VF tasks are widely used in patients with suspected dementia, but their accuracy for detection of mild cognitive impairment (MCI) is still under investigation. Schooling in particular may influence the subject's performance. The aim of this study was to compare the accuracy of two semantic categories (animals and fruits) in discriminating controls, MCI patients and Alzheimer's disease (AD) patients. Methods: 178 subjects, comprising 70 controls (CG), 70 MCI patients and 38 AD patients, were tested on two semantic VF tasks. The sample was divided into two schooling groups: those with 4-8 years of education and those with 9 or more years. Results: Both VF tasks - animal fluency (VFa) and fruits fluency (VFf) - adequately discriminated CG from AD in the total sample (AUC = 0.88 +/- 0.03, p < 0.0001) and in both education groups, and high educated MCI from AD (VFa: AUC = 0.82 +/- 0.05, p < 0.0001; VFf: AUC = 0.85 +/- 0.05, p < 0.0001). Both tasks were moderately accurate in discriminating CG from MCI (VFa: AUC = 0.68 +/- 0.04, p < 0.0001; VFf: AUC = 0.73 +/- 0.04, p < 0.0001) regardless of the schooling level, and MCI from AD in the total sample (VFa: AUC = 0.74 +/- 0.05, p < 0.0001; VFf: AUC = 0.76 +/- 0.05, p < 0.0001). Neither of the two tasks differentiated low educated MCI from AD. In the total sample, fruits fluency best discriminated CG from MCI and MCI from AD; a combination of the two improved the discrimination between CG and AD. Conclusions: Both categories were similar in discriminating CG from AD; the combination of both categories improved the accuracy for this distinction. Both tasks were less accurate in discriminating CG from MCI, and MCI from AD.
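The AUC comparisons above, including the gain from combining the two categories, could be reproduced along these lines. The sketch below uses scikit-learn with synthetic fluency counts and a simple logistic combination of the two scores; only the group sizes are taken from the abstract, and the way the study actually combined the tasks is not specified here.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Group sizes from the abstract (38 AD, 70 controls); the fluency counts are synthetic.
    ad = np.r_[np.ones(38), np.zeros(70)]
    vfa = np.r_[rng.normal(9, 3, 38), rng.normal(15, 4, 70)]   # animal fluency
    vff = np.r_[rng.normal(7, 2, 38), rng.normal(12, 3, 70)]   # fruit fluency

    print(roc_auc_score(ad, -vfa), roc_auc_score(ad, -vff))    # each task alone (lower = impaired)
    X = np.column_stack([vfa, vff])
    combined = LogisticRegression().fit(X, ad)
    print(roc_auc_score(ad, combined.predict_proba(X)[:, 1]))  # both tasks combined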