53 results for Patient Education as Topic - methods
Abstract:
Background: In Brazil, hospital malnutrition is highly prevalent, physician awareness of malnutrition is low, and nutrition therapy is underprescribed. One way to approach this problem is to educate health care providers in clinical nutrition. The present study aims to evaluate the effect of an intensive education course, given to health care professionals and students, on diagnostic ability concerning hospital malnutrition. Materials and methods: An intervention study based on a clinical nutrition educational program, offered to medical and nursing students and professionals, was held in a hospital of the Amazon region. Participants were evaluated through improvement of diagnostic ability, according to agreement of malnutrition diagnosis using Subjective Global Assessment before and after the workshop, as compared to independent evaluations (kappa index, k). To evaluate the impact of the educational intervention on hospital malnutrition diagnosis, medical records were reviewed for documentation of parameters associated with the nutritional status of in-patients. The SPSS statistical software package was used for data analysis. Results: A total of 165 participants concluded the program. The majority (76.4%) were medical and nursing students. Malnutrition diagnosis improved after the course (before, k = 0.5; after, k = 0.64; p < 0.05). A reduction of false negatives from 50% to 33.3% was observed. During the course, concern with nutritional diagnosis increased (W = 17.57; p < 0.001), and even after the course, improvement in height measurement was detected (chi(2) = 12.87; p < 0.001). Conclusions: Clinical nutrition education improved the ability to diagnose malnutrition; however, the primary impact was on medical and nursing students. To sustain diagnostic capacity, a clinical nutrition program should be part of health professional curricula and be coupled with continuing education for health care providers.
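Agreement statistics like the kappa index reported above can be computed directly from paired ratings. A minimal sketch, using invented trainee-versus-expert Subjective Global Assessment labels (the data below are hypothetical, for illustration only):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical ratings: "M" = malnourished, "W" = well nourished
trainee = ["M", "M", "W", "W", "M", "W", "W", "M", "W", "W"]
expert  = ["M", "M", "W", "W", "W", "W", "W", "M", "M", "W"]
print(round(cohen_kappa(trainee, expert), 2))
```

Values around 0.5 indicate moderate agreement and values around 0.6-0.8 substantial agreement, which is the scale on which the before/after improvement (k = 0.5 to k = 0.64) is read.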
Abstract:
Background: This study evaluated the impact of 2 models of educational intervention on rates of central venous catheter-associated bloodstream infections (CVC-BSIs). Methods: This was a prospective observational study conducted between January 2005 and June 2007 in 2 medical intensive care units (designated ICU A and ICU B) in a large teaching hospital. The study was divided into 3 periods: baseline (only rates were evaluated), preintervention (questionnaire to evaluate knowledge of health care workers [HCWs] and observation of CVC care in both ICUs), and intervention (in ICU A, tailored, continuous intervention; in ICU B, a single lecture). The preintervention and intervention periods for each ICU were compared. Results: During the preintervention period, 940 CVC-days were evaluated in ICU A and 843 CVC-days in ICU B. During the intervention period, 2175 CVC-days were evaluated in ICU A and 1694 CVC-days in ICU B. Questions regarding CVC insertion, disinfection during catheter manipulation, and use of an alcohol-based product during dressing application were answered correctly by 70%-100% of HCWs. Nevertheless, HCWs' adherence to these practices in the preintervention period was low for CVC handling and dressing, hand hygiene (6%-35%), and catheter hub disinfection (45%-68%). During the intervention period, HCWs' adherence to hand hygiene was 48%-98%, and adherence to hub disinfection was 82%-97%. CVC-BSI rates declined in both units. In ICU A, this decrease was progressive and sustained, from 12 CVC-BSIs/1000 CVC-days at baseline to 0 after 9 months. In ICU B, the rate initially dropped from 16.2 to 0 CVC-BSIs/1000 CVC-days, but then increased to 13.7 CVC-BSIs/1000 CVC-days. Conclusion: Personalized, continuous intervention seems to develop a "culture of prevention" and is more effective than a single intervention, leading to a sustained reduction of infection rates.
Abstract:
Conclusion: The cochlear implant was beneficial as an attempt to restore hearing and improve communication abilities in this patient with profound sensorineural hearing loss secondary to Susac syndrome. Objective: To report the audiological outcomes of cochlear implantation (CI) in a young woman with Susac syndrome after a 6-month follow-up period. Susac syndrome is a rare disorder, clinically characterized by a typical triad of sensorineural deafness, encephalopathy, and visual defect, due to microangiopathy involving the brain, inner ear, and retina. Methods: This was a retrospective review of a case at a tertiary referral center. After diagnosis, the patient was evaluated by a multidisciplinary team and received a cochlear implant in her right ear. Results: The patient achieved 100% open-set sentence recognition in noise and 92% monosyllable and 68% medial consonant recognition in quiet after 6 months of implant use. She reported using the telephone 3 months after activation.
Abstract:
Background: People with less education in Europe, Asia, and the United States are at higher risk of mortality associated with daily and longer-term air pollution exposure. We examined whether educational level modified associations between mortality and ambient particulate pollution (PM(10)) in Latin America, using several timescales. Methods: The study population included people who died during 1998-2002 in Mexico City, Mexico; Santiago, Chile; and Sao Paulo, Brazil. We fit city-specific robust Poisson regressions to daily deaths for nonexternal-cause mortality, and then stratified by age, sex, and educational attainment among adults older than age 21 years (none, some primary, some secondary, and high school degree or more). Predictor variables included a natural spline for temporal trend, linear PM(10) and apparent temperature at matching lags, and day-of-week indicators. We evaluated PM(10) for lags of 0 and 1 day, and fit an unconstrained distributed lag model for cumulative 6-day effects. Results: The effects of a 10-mu g/m(3) increment in lag 1 PM(10) on all nonexternal-cause adult mortality were 0.39% for Mexico City (95% confidence interval = 0.13%-0.65%), 1.04% for Sao Paulo (0.71%-1.38%), and 0.61% for Santiago (0.40%-0.83%). We found cumulative 6-day effects for adult mortality in Santiago (0.86% [0.48%-1.23%]) and Sao Paulo (1.38% [0.85%-1.91%]), but no consistent gradients by educational status. Conclusions: PM(10) had important short- and intermediate-term effects on mortality in these Latin American cities, but associations did not differ consistently by educational level.
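In a log-linear Poisson model like the one above, a coefficient beta per unit of PM(10) converts to the percent change in daily deaths for a given increment as 100 * (exp(beta * increment) - 1). A minimal sketch; the coefficient below is illustrative (chosen to roughly match the ~1% effect reported for Sao Paulo), not a value from the study:

```python
import math

def percent_increase(beta_per_unit, increment=10.0):
    """Percent change in the expected daily death count for a given
    exposure increment, under a Poisson model with a log link:
    100 * (exp(beta * increment) - 1)."""
    return 100.0 * (math.exp(beta_per_unit * increment) - 1.0)

# Illustrative coefficient per 1 ug/m3 of PM10 (hypothetical)
beta = 0.00104
print(round(percent_increase(beta), 2))
```

For small coefficients the result is close to 100 * beta * increment, which is why such effects are often quoted directly as "percent per 10 ug/m3".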
Abstract:
Background: Current evidence suggests an inverse association between socioeconomic status and stroke incidence. Our aim was to measure the variation in incidence among different city districts (CD) and their association with socioeconomic variables. Methods: We prospectively ascertained all possible stroke cases occurring in the city of Joinville during the period 2005-2007. We determined the incidence for each of the 38 CD, age-adjusted to the population of Joinville. By linear regression analysis, we correlated incidence data with mean years of education (MYE) and mean income per month (MIPM). Results: Of the 1,734 stroke cases registered, 1,034 were first-ever strokes. In the study period, the crude incidence in Joinville was 69.5 per 100,000 (95% confidence interval, 65.3-73.9). The stroke incidence among CD ranged from 37.5 (22.2-64.6) to 151.0 per 100,000 (69.0-286.6). The stroke incidence was inversely correlated with years of education (r = -0.532; p < 0.001). MYE and MIPM were strongly related (R = 0.958), resulting in exclusion of MIPM by collinearity. Conclusions: Years of education can explain a wide incidence variation among CD. These results may be useful to guide the allocation of resources in primary prevention policies. Copyright (C) 2011 S. Karger AG, Basel
Abstract:
Introduction. Few population-based studies of erectile dysfunction (ED) have included subjects less than 40 years old and analyzed the several factors and consequences potentially associated with this condition. Aim. To evaluate the prevalence of erectile dysfunction (ED) and associated factors in a sample of Brazilian men aged 18 to 40 years. Methods. Cross-sectional study in which subjects were approached in public places in 18 major Brazilian cities and interviewed using an anonymous questionnaire. Survey data were submitted to chi-squared, Student's t-test, and logistic regression analyses. Main Outcome Measures. The data were collected by means of a self-administered questionnaire with 87 questions about sociodemographic variables, general health, habits and lifestyle-related factors, sexual behavior, and sexual difficulties, including ED, which was assessed by a single question. Results. The prevalence of ED among 1,947 men was 35.0% (73.7% mild, 26.3% moderate/complete). A greater frequency of ED was seen in subjects who had never had information about sex, experienced difficulties at the beginning of sexual life, and had never masturbated. ED was associated with a lower level of education, but not with race, sexual orientation, employment, or marital status. Also, no association was found between ED and smoking, alcoholism, obesity, sedentary lifestyle, diabetes, hypertension, cardiovascular disease, hyperlipidemia, depression, or anxiety. ED had a negative impact on men's self-esteem, interpersonal relationships, work and leisure activities, and sexual life satisfaction. Less than 10% of men with ED had received medical treatment for this problem. Conclusions. The prevalence of ED in this young population was high, mostly of mild severity. Low education and psychosocial problems were associated with ED and, probably due to the sample subjects' young age, no association was found with organic problems. Measures in the fields of education and prevention of psychosocial difficulties would have a positive impact on the control of erectile dysfunction in the young population. Martins FG and Abdo CHN. Erectile dysfunction and correlated factors in Brazilian men aged 18-40 years. J Sex Med 2010;7:2166-2173.
Abstract:
Objective: This study evaluates whether a course that was designed for first-year psychiatric residents and that specifically addressed psychodynamic principles fostered residents' progress in knowledge, skills, and attitudes regarding these concepts. Methods: The course was given in the 2005 academic year to all residents (N = 18) in their first psychiatric postgraduate year at the Department and Institute of Psychiatry, University of Sao Paulo, Brazil. The residents were assessed in the first and last sessions of the course through a written test that was blindly rated by two independent judges. Residents were also interviewed to observe whether psychodynamic concepts had been integrated into actual practice. Their responses were subjected to content analysis. Significance was tested using analysis of variance or nonparametric tests when necessary. Agreement between the judges was tested using intraclass correlation coefficients. Results: The judges demonstrated a high level of agreement. Between the tests taken before and after the course, the total score increased by a mean of 2.5 points (out of a total test score of 10 points). Additionally, residents started to undergo personal psychotherapy after the course. They reported that the course had markedly improved their relationship with patients. They emphasized the opportunities for self-reflection and for gaining insights into themselves and patient treatment issues. Conclusion: This initial study indicates that this educational method can effectively promote psychodynamic knowledge, skills, and appropriate attitudes for managing psychiatric outpatients among residents. The course was very well received by the residents, and a similar method can easily be instituted within other residency programs that pursue integrated teaching methods.
Abstract:
Background: Many factors have been associated with the onset and maintenance of depressive symptoms in later life, although this knowledge is yet to be translated into significant health gains for the population. This study gathered information about common modifiable and non-modifiable risk factors for depression with the aim of developing a practical probabilistic model of depression that can be used to guide risk reduction strategies. Methods: A cross-sectional study was undertaken of 20,677 community-dwelling Australians aged 60 years or over in contact with their general practitioner during the preceding 12 months. Prevalent depression (minor or major) according to the Patient Health Questionnaire (PHQ-9) assessment was the main outcome of interest. Other measured exposures included self-reported age, gender, education, loss of mother or father before age 15 years, physical or sexual abuse before age 15 years, marital status, financial stress, social support, smoking and alcohol use, physical activity, obesity, diabetes, hypertension, and prevalent cardiovascular diseases, chronic respiratory diseases, and cancer. Results: The mean age of participants was 71.7 +/- 7.6 years, and 57.9% were women. Depression was present in 1,665 (8.0%) of our subjects. Multivariate logistic regression showed depression was independently associated with age older than 75 years, childhood adverse experiences, adverse lifestyle practices (smoking, risky alcohol use, physical inactivity), intermediate health hazards (obesity, diabetes, and hypertension), comorbid medical conditions (clinical history of coronary heart disease, stroke, asthma, chronic obstructive pulmonary disease, emphysema, or cancers), and social or financial strain.
We stratified the exposures to build a matrix that showed that the probability of depression increased progressively with the accumulation of risk factors, from less than 3% for those with no adverse factors to more than 80% for people reporting the maximum number of risk factors. Conclusions: Our probabilistic matrix can be used to estimate depression risk and to guide the introduction of risk reduction strategies. Future studies should now aim to clarify whether interventions designed to mitigate the impact of risk factors can change the prevalence and incidence of depression in later life.
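A probabilistic matrix like the one described can be read off a fitted logistic model, where the predicted probability rises with each added risk factor. The sketch below uses invented coefficients (not the study's fitted values), chosen only so that the extremes roughly match the reported range of under 3% with no risk factors and over 80% with all of them:

```python
import math

def depression_probability(intercept, coefs, exposures):
    """Predicted probability from a logistic model:
    p = 1 / (1 + exp(-(b0 + sum(bi * xi)))).
    All coefficients here are illustrative, not estimated from data."""
    logit = intercept + sum(b * x for b, x in zip(coefs, exposures))
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical log-odds: baseline plus five binary risk factors
# (e.g. age > 75, childhood adversity, adverse lifestyle,
# intermediate health hazard, comorbid condition)
b0 = -3.5
betas = [0.8, 0.9, 1.0, 1.1, 1.2]

p_none = depression_probability(b0, betas, [0, 0, 0, 0, 0])
p_all = depression_probability(b0, betas, [1, 1, 1, 1, 1])
print(round(p_none, 3), round(p_all, 3))
```

Because the logit is additive, each extra risk factor multiplies the odds by a fixed amount, which is what produces the progressive rise in probability across the matrix.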
Abstract:
Background: The Cambridge Cognitive Examination (CAMCOG) is a useful test in screening for Alzheimer's disease (AD). However, the interpretation of CAMCOG cut-off scores is problematic, and reference values are needed for different educational strata. Given the importance of earlier diagnosis of mild dementia, new cut-off values are required which take into account patients with low levels of education. This study aims to evaluate whether the CAMCOG can be used as an accurate screening test among AD patients and normal controls with different educational levels. Methods: Cross-sectional assessment was undertaken of 113 AD patients and 208 elderly controls with heterogeneous educational levels (group 1: 1-4 years; group 2: 5-8 years; and group 3: >= 9 years) from a geriatric clinic, submitted to a thorough diagnostic evaluation for AD including the Cambridge Examination for Mental Disorders of the Elderly (CAMDEX). Controls had no cognitive or mood complaints. Sensitivity (SE) and specificity (SP) for the CAMCOG in each educational group were assessed with receiver operating characteristic (ROC) curves. Results: Mean CAMCOG values were lower at lower educational levels in both diagnostic groups (controls - group 1: 87; group 2: 91; group 3: 96; AD - group 1: 63; group 2: 62; group 3: 77). Cut-off scores for the three education groups were 79, 80 and 90, respectively. SE and SP varied among the groups (group 1: 88.1% and 83.5%; group 2: 84.6% and 96%; group 3: 70.8% and 90%). Conclusion: The CAMCOG can be used as a cognitive test for patients with low educational level with good accuracy. Patients with higher education showed lower scores than previously reported.
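Education-specific cut-off scores like those above are typically chosen from the ROC curve, for example by maximizing Youden's J = sensitivity + specificity - 1 over candidate thresholds. A minimal sketch on invented CAMCOG totals (the scores below are hypothetical, not the study's data):

```python
def best_cutoff(cases, controls):
    """Pick the screening cut-off that maximizes Youden's
    J = sensitivity + specificity - 1. Scores below the cut-off
    are classified as impaired, as with CAMCOG totals."""
    best = None
    for c in sorted(set(cases + controls)):
        sens = sum(s < c for s in cases) / len(cases)        # cases flagged
        spec = sum(s >= c for s in controls) / len(controls)  # controls passed
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j, sens, spec)
    return best

# Hypothetical CAMCOG totals for AD patients and controls
# in a single education stratum
ad = [60, 62, 65, 70, 72, 75, 78]
ctrl = [80, 82, 85, 88, 90, 92, 95]
cutoff, j, sens, spec = best_cutoff(ad, ctrl)
print(cutoff, round(j, 2))
```

With real, overlapping score distributions J stays below 1, and repeating the search within each education stratum yields the stratum-specific cut-offs reported in studies of this kind.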
Abstract:
BACKGROUND: Treatment recommendations have been developed for the management of patients with chronic myeloid leukemia (CML). METHODS: A 30-item multiple-choice questionnaire was administered to 435 hematologists and oncohematologists in 16 Latin American countries. Physicians self-reported their diagnostic, therapeutic, and disease management strategies. RESULTS: Imatinib is available as initial therapy to 92% of physicians, and 42% of physicians have access to both second-generation tyrosine kinase inhibitors. Standard-dose imatinib is the preferred initial therapy for most patients, but 20% would manage a young patient initially with an allogeneic stem cell transplant from a sibling donor, and 10% would only offer hydroxyurea to an elderly patient. Seventy-two percent of respondents perform routine cytogenetic analysis for monitoring patients on therapy, and 59% routinely use quantitative polymerase chain reaction. For patients who fail imatinib therapy, 61% would increase the dose of imatinib before considering a change to a second-generation tyrosine kinase inhibitor, except for patients aged 60 years, for whom a switch to a second-generation tyrosine kinase inhibitor was the preferred choice. CONCLUSIONS: The answers to this survey provide insight into the management of patients with CML in Latin America. Some deviations from current recommendations were identified. Understanding the treatment patterns of patients with CML in broad population studies is important to identify needs and improve patient care. Cancer 2010;116:4991-5000. (C) 2010 American Cancer Society.
Abstract:
OBJECTIVE To evaluate the effect of the environment and the observer on the measurement of blood pressure (BP), as well as to compare home BP (HBP) and ambulatory BP (ABP) measurements in the diagnosis of white coat hypertension (WCH) and masked hypertension (MH) in children and adolescents with hypertension (HT). METHODS The BP of 40 patients with HT (75% of whom had secondary HT and were on antihypertensive medication), mean age 12.1 years, was evaluated through casual measurements at the clinic and at the HT unit, through HBP for 14 days with the OMRON HEM 705 CP monitor (Omron, Tokyo, Japan), and through 24-h ABP performed with the SPACELABS 90207 (Spacelabs, Redmond, WA). RESULTS HT was diagnosed at the doctor's office, by ABP, and by HBP in 30/40, 27/40, and 31/40 patients, respectively. Based on office BP and ABP, 60% of patients were normotensive, 17.5% hypertensive, 7.5% had WCH, and 15% had MH, whereas based on office BP and HBP, 65%, 12.5%, 10%, and 12.5% of patients were classified according to these diagnoses, respectively. There was considerable diagnostic agreement of HT by ABP and HBP (McNemar test, P < 0.01; kappa = 0.56). CONCLUSION In hypertensive children and adolescents, HBP and ABP present comparable results. HBP appears to be a useful diagnostic test for the detection of MH and WCH in pediatric patients.
Abstract:
OBJECTIVE. The purpose of the study was to investigate patient characteristics associated with image quality and their impact on the diagnostic accuracy of MDCT for the detection of coronary artery stenosis. MATERIALS AND METHODS. Two hundred ninety-one patients with a coronary artery calcification (CAC) score of <= 600 Agatston units (214 men and 77 women; mean age, 59.3 +/- 10.0 years [SD]) were analyzed. An overall image quality score was derived using an ordinal scale. The accuracy of quantitative MDCT in detecting significant (>= 50%) stenoses was assessed against quantitative coronary angiography (QCA) per patient and per vessel using a modified 19-segment model. The effects of CAC, obesity, heart rate, and heart rate variability on image quality and accuracy were evaluated by multiple logistic regression. Image quality and accuracy were further analyzed in subgroups of significant predictor variables. Diagnostic performance was determined for image quality strata using receiver operating characteristic (ROC) curves. RESULTS. Increasing body mass index (BMI) (odds ratio [OR] = 0.89, p < 0.001), increasing heart rate (OR = 0.90, p < 0.001), and the presence of breathing artifact (OR = 4.97, p = 0.001) were associated with poorer image quality, whereas sex, CAC score, and heart rate variability were not. Compared with examinations of white patients, studies of black patients had significantly poorer image quality (OR = 0.58, p = 0.04). At a vessel level, CAC score (per 10 Agatston units) (OR = 1.03, p = 0.012) and patient age (OR = 1.02, p = 0.04) were significantly associated with the diagnostic accuracy of quantitative MDCT compared with QCA. A trend was observed in differences in the areas under the ROC curves across image quality strata at the vessel level (p = 0.08). CONCLUSION. Image quality is significantly associated with patient ethnicity, BMI, mean scan heart rate, and the presence of breathing artifact, but not with CAC score at a patient level. At a vessel level, CAC score and age were associated with reduced diagnostic accuracy.
Abstract:
Background: Verbal fluency (VF) tasks are simple and efficient clinical tools to detect executive dysfunction and lexico-semantic impairment. VF tasks are widely used in patients with suspected dementia, but their accuracy for the detection of mild cognitive impairment (MCI) is still under investigation. Schooling in particular may influence the subject's performance. The aim of this study was to compare the accuracy of two semantic categories (animals and fruits) in discriminating controls, MCI patients, and Alzheimer's disease (AD) patients. Methods: 178 subjects, comprising 70 controls (CG), 70 MCI patients, and 38 AD patients, were tested on two semantic VF tasks. The sample was divided into two schooling groups: those with 4-8 years of education and those with 9 or more years. Results: Both VF tasks - animal fluency (VFa) and fruit fluency (VFf) - adequately discriminated CG from AD in the total sample (AUC = 0.88 +/- 0.03, p < 0.0001) and in both education groups, and highly educated MCI from AD (VFa: AUC = 0.82 +/- 0.05, p < 0.0001; VFf: AUC = 0.85 +/- 0.05, p < 0.0001). Both tasks were moderately accurate in discriminating CG from MCI (VFa: AUC = 0.68 +/- 0.04, p < 0.0001; VFf: AUC = 0.73 +/- 0.04, p < 0.0001) regardless of the schooling level, and MCI from AD in the total sample (VFa: AUC = 0.74 +/- 0.05, p < 0.0001; VFf: AUC = 0.76 +/- 0.05, p < 0.0001). Neither of the two tasks differentiated less-educated MCI from AD. In the total sample, fruit fluency best discriminated CG from MCI and MCI from AD; a combination of the two improved the discrimination between CG and AD. Conclusions: Both categories were similar in discriminating CG from AD; the combination of both categories improved the accuracy of this distinction. Both tasks were less accurate in discriminating CG from MCI, and MCI from AD.
Abstract:
Background: Dementia screening in elderly people with low education can be difficult to implement. For these subjects, informant reports using the long (L) (26 items) and short (S) (16 items) versions of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) can be useful. The objective of the present study was to investigate the performance of the Brazilian versions of the IQCODE L, S, and a new short version (SBr) (15 items) in comparison with the Mini-mental State Examination (MMSE) for dementia screening in elderly people with low education. Methods: Thirty-four patients with mild to moderate dementia, diagnosed according to ICD-10 criteria, and 57 controls were evaluated and divided into three groups based on their socioeconomic status and level of education. Patients were evaluated using the MMSE, and the informants were interviewed using the IQCODE by interviewers blind to the clinical diagnosis. Results: Education was correlated with MMSE results (r = 0.280, p = 0.031), but not with the versions of the IQCODE. The performance of the instruments, evaluated by ROC curves, was very similar, with good internal consistency (Cronbach's alpha = 0.97). The MMSE correctly classified 85.7% of the subjects, while the three IQCODE versions (L, S, and SBr) correctly classified 91.2% of the subjects. Conclusions: The long, short, and new short Brazilian IQCODE versions can be useful as screening tools for patients with mild and moderate dementia in Brazil. The IQCODE is not biased by schooling, and it seems to be an adequate instrument for samples with low levels of education.
Abstract:
Background The CAMCOG is a brief neuropsychological battery designed to assess global cognitive function and ascertain the impairments that are required for the diagnosis of dementia. To date, cut-off scores for mild cognitive impairment (MCI) have not been determined. Given the need for an earlier diagnosis of mild dementia, new cut-off values are also necessary, taking into account cultural and educational effects. Methods One hundred and fifty-seven older adults (mean age: 69.6 +/- 7.4 years) with 8 or more years of formal education (mean years of schooling: 14.2 +/- 3.8) attending a memory clinic at the Institute of Psychiatry, University of Sao Paulo, were included. Subjects were divided into three groups according to their cognitive status, established through clinical and neuropsychological assessment: normal controls, n = 62; MCI, n = 65; and mild or moderate dementia, n = 30. ROC curve analyses were performed for dementia vs controls, MCI vs controls, and MCI vs dementia. Results The cut-off values were: 92/93 for dementia vs controls (AUC = 0.99; sensitivity: 100%, specificity: 95%); 95/96 for MCI vs controls (AUC = 0.83; sensitivity: 64%, specificity: 88%); and 85/86 for MCI vs dementia (AUC = 0.91; sensitivity: 81%, specificity: 88%). The total CAMCOG score was more accurate than its subtests (Mini-mental State Examination, Verbal Fluency Test, and Clock Drawing Test) when used separately. Conclusions The CAMCOG discriminated controls and MCI from demented patients, but was less accurate in discriminating MCI from controls. The best cut-off value to differentiate controls from demented patients was higher than suggested in the original publication, probably because only cases of mild to moderate dementia were included. This is important given the need for a diagnosis at earlier stages of Alzheimer's disease. Copyright (C) 2008 John Wiley & Sons, Ltd.