994 results for Category
Abstract:
Objectives We studied the relationship between changes in body composition and changes in blood pressure levels. Background The mechanisms underlying the frequently observed progression from pre-hypertension to hypertension are poorly understood. Methods We examined 1,145 subjects from a population-based survey at baseline in 1994/1995 and at follow-up in 2004/2005. First, we studied individuals pre-hypertensive at baseline who, during 10 years of follow-up, either had normalized blood pressure (PreNorm, n = 48), persistently had pre-hypertension (PrePre, n = 134), or showed progression to hypertension (PreHyp, n = 183). In parallel, we studied predictors for changes in blood pressure category in individuals hypertensive at baseline (n = 429). Results After 10 years, the PreHyp group was characterized by a marked increase in body weight (+5.71% [95% confidence interval (CI): 4.60% to 6.83%]) that was largely the result of an increase in fat mass (+17.8% [95% CI: 14.5% to 21.0%]). In the PrePre group, both the increases in body weight (+1.95% [95% CI: 0.68% to 3.22%]) and fat mass (+8.09% [95% CI: 4.42% to 11.7%]) were significantly less pronounced than in the PreHyp group (p < 0.001 for both). The PreNorm group showed no significant change in body weight (-1.55% [95% CI: -3.70% to 0.61%]) and fat mass (+0.20% [95% CI: -6.13% to 6.52%], p < 0.05 for both, vs. the PrePre group). Conclusions After 10 years of follow-up, hypertension developed in 50.1% of individuals with pre-hypertension and only 6.76% went from hypertensive to pre-hypertensive blood pressure levels. An increase in body weight and fat mass was a risk factor for the development of sustained hypertension, whereas a decrease was predictive of a decrease in blood pressure. (J Am Coll Cardiol 2010; 56: 65-76) (C) 2010 by the American College of Cardiology Foundation
Abstract:
Background: Verbal fluency (VF) tasks are simple and efficient clinical tools to detect executive dysfunction and lexico-semantic impairment. VF tasks are widely used in patients with suspected dementia, but their accuracy for detection of mild cognitive impairment (MCI) is still under investigation. Schooling in particular may influence the subject's performance. The aim of this study was to compare the accuracy of two semantic categories (animals and fruits) in discriminating controls, MCI patients and Alzheimer's disease (AD) patients. Methods: 178 subjects, comprising 70 controls (CG), 70 MCI patients and 38 AD patients, were tested on two semantic VF tasks. The sample was divided into two schooling groups: those with 4-8 years of education and those with 9 or more years. Results: Both VF tasks - animal fluency (VFa) and fruit fluency (VFf) - adequately discriminated CG from AD in the total sample (AUC = 0.88 +/- 0.03, p < 0.0001) and in both education groups, and highly educated MCI from AD (VFa: AUC = 0.82 +/- 0.05, p < 0.0001; VFf: AUC = 0.85 +/- 0.05, p < 0.0001). Both tasks were moderately accurate in discriminating CG from MCI (VFa: AUC = 0.68 +/- 0.04, p < 0.0001; VFf: AUC = 0.73 +/- 0.04, p < 0.0001) regardless of the schooling level, and MCI from AD in the total sample (VFa: AUC = 0.74 +/- 0.05, p < 0.0001; VFf: AUC = 0.76 +/- 0.05, p < 0.0001). Neither of the two tasks differentiated low-educated MCI from AD. In the total sample, fruit fluency best discriminated CG from MCI and MCI from AD; a combination of the two improved the discrimination between CG and AD. Conclusions: Both categories were similar in discriminating CG from AD; the combination of both categories improved the accuracy for this distinction. Both tasks were less accurate in discriminating CG from MCI, and MCI from AD.
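The group comparisons in this abstract are reported as areas under the ROC curve (AUC). As an illustration only — not the authors' analysis code, and with invented fluency counts — the AUC of a score separating two groups can be computed directly from its rank-based (Mann-Whitney) definition:

```python
# Illustrative only: AUC for a diagnostic score separating two groups.
# The AUC equals the probability that a randomly chosen control scores
# higher than a randomly chosen patient (counting ties as half).
def auc(scores_pos, scores_neg):
    """Mann-Whitney U statistic divided by n1 * n2."""
    wins = ties = 0
    for p in scores_pos:       # e.g. patients (lower fluency expected)
        for n in scores_neg:   # e.g. controls
            if n > p:
                wins += 1
            elif n == p:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical animal-fluency counts (words produced in 60 s)
patients = [8, 10, 11, 9, 12, 7]
controls = [14, 16, 12, 18, 15, 13]
print(round(auc(patients, controls), 2))  # -> 0.99
```

An AUC of 0.5 is chance-level discrimination; values near the reported 0.88 for CG vs. AD indicate strong separation, while the 0.68-0.76 range reported for the MCI contrasts corresponds to only moderate separation.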
Abstract:
Education significantly impacts cognitive performance of older adults even in the absence of dementia. Some cognitive tests seem less vulnerable to the influence of education and thus may be more suitable for cognitive assessment of older adults with heterogeneous backgrounds. The objective of this study was to investigate which tests in a cognitive battery were less influenced by educational level in a sample of cognitively unimpaired older Brazilians. In addition, we evaluated the impact of very high educational levels on cognitive performance. The cognitive battery consisted of the Mini Mental State Examination (MMSE), Cambridge Cognitive Test (CAMCOG), Clock Drawing Test, Short Cognitive Performance Test (SKT), Rivermead Behavioural Memory Test (RBMT), Fuld Object Memory Evaluation (FOME), Verbal Fluency Test (VF) fruit category, Trail Making Test A and B, WAIS-R Vocabulary, and Block Design. Education did not exert a significant influence on the RBMT, FOME, and VF (p > .05). Subjects with very high educational levels had similar performance on the latter tests when compared with those with intermediate and low levels of education. In conclusion, the RBMT, FOME, and VF fruit category seem to be appropriate tools for the assessment of cognitive function in elderly Brazilians with varying degrees of educational attainment.
Abstract:
Background. Clinical manifestations of dengue vary in different areas of endemicity and between specific age groups, whereas predictors of outcome have remained controversial. In Brazil, the disease burden predominantly affects adults, with an increasing trend toward progression to dengue hemorrhagic fever (DHF) noted. Methods. A cohort of adults with confirmed cases of dengue was recruited in central Brazil in 2005. Patients were classified according to the severity of their disease. Associations of antibody responses, viremia levels (as determined by real-time polymerase chain reaction [PCR]), and serotypes (as determined by multiplex PCR) with disease severity were evaluated. Results. Of the 185 symptomatic patients > 14 years of age who had a confirmed case of dengue, 26.5% and 23.2% were classified as having intermediate dengue fever (DF)/DHF (defined as internal hemorrhage, plasma leakage, manifest signs of shock, and/or thrombocytopenia [platelet count <= 50,000 platelets/mm³]) and DHF, respectively. The onset of intermediate DF/DHF and DHF occurred at a late stage of disease, around the period of defervescence. Patients with DHF had abnormal liver enzyme levels, with a > 3-fold increase in aspartate aminotransferase level compared with the range of values considered to be normal. Overall, 65% of patients presented with secondary dengue virus infections, which occurred in similar proportions of patients in each of the 3 disease category groups. Dengue virus serotype 3 (DV3) was the predominant serotype, and viremia was detected during and after defervescence among patients with DHF or intermediate DF/DHF. Conclusions. Viremia was detected after defervescence in adult patients classified as having DHF or intermediate DF/DHF. Secondary infection was not a predictor of severe clinical manifestations in adults infected with the DV3 serotype.
Abstract:
Autopsy is a valuable tool in evaluating diagnostic accuracy. Solid malignancies may have a protracted presentation, and diagnosis frequently requires imaging and deep-sited biopsies; discrepancies between clinical and postmortem diagnoses may therefore occur at a high rate in these diseases. Here, we analyzed the occurrence of clinico-pathological discrepancies in the diagnoses of solid malignancies in a Brazilian academic hospital. We reviewed charts and autopsy reports of patients who died from 2001 to 2003 with at least one solid neoplasm. Patients were classified as concordant or discordant cases regarding cancer diagnosis. Discordant cases were categorized as undiagnosed (no suspicion of cancer) or misdiagnosed (clinical suspicion of cancer but incompletely diagnosed). Among the 264 patients with a single non-incidental solid neoplasm, the clinico-pathological discrepancy rate was 37.1%. Liver (22.5%), lung (19.4%), and pancreatic cancer (15.3%) were the most frequent malignancies in the discordant group. The misdiagnosed category comprised 68% of the discordant cases, i.e., cases in which the tumor's primary site and/or histological type was not correctly established during life. Our data show that a high rate of discrepancies occurs in solid malignancies. Autopsies may provide the basis for a better understanding of diagnostic deficiencies in different circumstances. (C) 2008 Elsevier GmbH. All rights reserved.
Abstract:
Background: Coffee consumption has been associated with a lower risk of diabetes, but little is known about the mechanisms responsible for this association, especially related to the time when coffee is consumed. Objective: We examined the long-term effect of coffee, globally and according to the accompanying meal, and of tea, chicory, and caffeine on type 2 diabetes risk. Design: This was a prospective cohort study including 69,532 French women, aged 41-72 y from the E3N/EPIC (Etude Epidemiologique aupres de Femmes de la Mutuelle Generale de l`Education Nationale/European Prospective Investigation into Cancer and Nutrition) cohort study, without diabetes at baseline. Food and drink intakes per meal were assessed by using a validated diet-history questionnaire in 1993-1995. Results: During a mean follow-up of 11 y, 1415 new cases of diabetes were identified. In multivariable Cox regression models, the hazard ratio in the highest category of coffee consumption [>= 3 cups (375 mL)/d] was 0.73 (95% CI: 0.61, 0.87; P for trend < 0.001), in comparison with no coffee consumption. This inverse association was restricted to coffee consumed at lunchtime (hazard ratio: 0.66; 95% CI: 0.57, 0.76) when comparing >1.1 cup (125 mL)/meal with no intake. At lunchtime, this inverse association was observed for both regular and decaffeinated coffee and for filtered and black coffee, with no effect of sweetening. Total caffeine intake was also associated with a statistically significantly lower risk of diabetes. Neither tea nor chicory consumption was associated with diabetes risk. Conclusions: Our data support an inverse association between coffee consumption and diabetes and suggest that the time of drinking coffee plays a distinct role in glucose metabolism. Am J Clin Nutr 2010; 91: 1002-12.
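As a back-of-the-envelope plausibility check (a sketch, not part of the study), the reported hazard ratio and its 95% CI imply an approximate standard error and z statistic on the log scale, because a Wald confidence interval is symmetric in log(HR):

```python
import math

# Reported result: HR = 0.73, 95% CI (0.61, 0.87) for >= 3 cups coffee/day.
# A Wald 95% CI spans log(HR) +/- 1.96 * SE, so SE and z can be recovered
# from the CI endpoints on the log scale.
hr, ci_lo, ci_hi = 0.73, 0.61, 0.87
se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * 1.96)
z = math.log(hr) / se
print(round(se, 3), round(z, 2))  # -> 0.091 -3.47
```

A z statistic near -3.5 corresponds to a two-sided p-value well below 0.001, consistent with the reported trend test (P for trend < 0.001).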
Abstract:
OBJECTIVE. To evaluate the effect of oral hygiene with 0.12% chlorhexidine gluconate on the incidence of nosocomial pneumonia and ventilator-associated pneumonia (VAP) in children undergoing cardiac surgery. DESIGN. Prospective, randomized, double-blind, placebo-controlled trial. SETTING. Pediatric intensive care unit (PICU) at a tertiary care hospital. PATIENTS. One hundred sixty children undergoing surgery for congenital heart disease, randomized into 2 groups: chlorhexidine (n = 87) and control (n = 73). INTERVENTIONS. Oral hygiene with 0.12% chlorhexidine gluconate or placebo preoperatively and twice a day postoperatively until PICU discharge or death. RESULTS. Patients in the experimental and control groups had similar ages (median, 12.2 vs 10.8 months; P = .72) and Risk Adjustment for Congenital Heart Surgery-1 score distribution (66% in category 1 or 2 in both groups; P = .17). The incidence of nosocomial pneumonia was 29.8% versus 24.6% (P = .46) and the incidence of VAP was 18.3% versus 15% (P = .57) in the chlorhexidine and the control group, respectively. There was no difference in intubation time (P = .34), need for reintubation (P = .37), time interval between hospitalization and nosocomial pneumonia diagnosis (P = .63), time interval between surgery and nosocomial pneumonia diagnosis (P = .10), or time on antibiotics (P = .77) and vasoactive drugs (P = .16) between groups. Median length of PICU stay (3 vs 4 days; P = .53), median length of hospital stay (12 vs 11 days; P = .67), and 28-day mortality (5.7% vs 6.8%; P = .77) were also similar in the chlorhexidine and the control group. CONCLUSIONS. Oral hygiene with 0.12% chlorhexidine gluconate did not reduce the incidence of nosocomial pneumonia and VAP in children undergoing cardiac surgery.
Abstract:
Six of the short dietary questions used in the 1995 National Nutrition Survey (see box below) were evaluated for relative validity, both directly and indirectly, and for consistency, by documenting the differences in mean intakes of foods and nutrients as measured on the 24-hour recall between groups with different responses to the short questions. 1. Including snacks, how many times do you usually have something to eat in a day, including evenings? 2. How many days per week do you usually have something to eat for breakfast? 3. In the last 12 months, were there any times that you ran out of food and couldn't afford to buy more? 4. What type of milk do you usually consume? 5. How many serves of vegetables do you usually eat each day? (a serve = 1/2 cup cooked vegetables or 1 cup of salad vegetables) 6. How many serves of fruit do you usually eat each day? (a serve = 1 medium piece or 2 small pieces of fruit or 1 cup of diced pieces) These comparisons were made for males and females overall and for population sub-groups of interest, including: age, socio-economic disadvantage, region of residence, country of birth, and BMI category. Several limitations to this evaluation of the short questions, as discussed in the report, need to be kept in mind, including: · The method available for comparison (24-hour recall) was not an ideal (gold standard) method, as it measures yesterday's intake. This limitation was overcome by examining only mean differences between groups of respondents, since mean intake for a group can provide a reasonable approximation of 'usual' intake. · The need to define and identify, post hoc, from the 24-hour recall the number of eating occasions, and the occasions identified by the respondents as breakfast. · Predetermined response categories for some of the questions effectively limited the number of categories available for evaluation.
· Other foods and nutrients, not selected for this evaluation, may have an indirect relationship with the question and might have shown stronger and more consistent responses. · The number of responses in some categories of the short questions (e.g. for food security) may have been too small to detect significant differences between population sub-groups. · No information was available to examine the validity of these questions for detecting differences over time (establishing trends) in food habits and indicators of selected nutrient intakes. By contrast, the strength of this evaluation was its very large sample size (atypical of most validation studies of dietary assessment) and thus the opportunity to investigate question performance in a range of broad population sub-groups compared with a well-conducted, quantified survey of intakes. The results of the evaluation are summarised below for each of the questions, and specific recommendations for future testing, modifications and use are provided for each question. The report concludes with some general recommendations for the further development and evaluation of short dietary questions.
Abstract:
Empirical studies on the impact of women’s paid jobs on their empowerment and welfare in the Bangladesh context are rare. The few studies on the issue to date have all been confined to the garment workers only although studies indicate that women’s workforce participation in Bangladesh has increased across-the-board. Besides, none of these studies has made an attempt to control for the non-working women and/or applied any statistical technique to control for the effects of other pertinent determinants of women’s empowerment and welfare such as education, age, religion and place of living. This study overcomes these drawbacks and presents alternative assessments of the link between women’s workforce participation and empowerment on the basis of survey data from the two largest cities in Bangladesh. While the generic assessment indicates that women’s paid jobs have positive implications for women’s participation in decisions on fertility, children’s education and healthcare as well as their possession and control of resources, the econometric assessment negates most of these observations. Women’s education, on the other hand, appears to be more important than their participation in the labour force. The study underlines the fact that by omitting other relevant explanatory variables from the analysis, the previous studies might have overestimated the impact of women’s paid work on their empowerment. Among other things, the paper also highlights the importance of women’s job category, religion and regional differences for women’s empowerment.
Abstract:
To analyse the gutta-percha filled area of C-shaped molar teeth root filled with the modified MicroSeal technique with reference to the radiographic features and the C-shaped canal configuration. Twenty-three mandibular second molar teeth with C-shaped roots were classified according to their radiographic features as: type I - merging, type II - symmetrical and type III - asymmetrical. The canals were root filled using a modified technique of the MicroSeal system. Horizontal sections at intervals of 600 µm were made from 1 mm from the apex to the subpulpal floor level. The percentage of gutta-percha area at the apical, middle and coronal levels of the radiographic types was analysed using the Kruskal-Wallis test. Complementary analysis of the C-shaped canal configurations (C1, C2 and C3) determined from cross-sections from the apical third was performed in a similar way. No significant differences were found between the radiographic types in terms of the percentage of gutta-percha area at any level (P > 0.05): apical third, type I: 77.04%, II: 70.48% and III: 77.13%; middle third, type I: 95.72%, II: 93.17%, III: 91.13%; and coronal level, type I: 98.30%, II: 98.25%, III: 97.14%. Overall, the percentage of the filling material was lower in the apical third (P < 0.05). No significant differences were found between the C-shaped canal configurations apically; C1: 72.64%, C2: 79.62%, C3: 73.51% (P > 0.05). The percentage of area filled with gutta-percha was similar in the three radiographic types and canal configuration categories of C-shaped molars. These results show the difficulty of achieving predictable filling of the root canal system when this anatomical variation exists. In general, the apical third was less completely filled.
Abstract:
The present study examined the occupational aspirations of sixth-grade children in terms of occupational category, minimum education level and gender. In addition, the study identified the sources of occupational information used by the children and the factors they thought could influence them toward or away from a job. The study found that all of the children were able to express occupational aspirations. While the children obtained occupational information from a range of sources, including the media and family, the source most likely to influence them toward or away from choosing a job was family.
Abstract:
It can be said that some of the topics and ideas that command our interest or attention are autobiographical in origin. This paper subscribes to this category. In this paper, I present a perspective on preparing professional personnel, namely, educators, practitioners, teachers, student teachers, and researchers, for cultural inclusion. This perspective is drawn from my experiences as a former postgraduate student from a culturally diverse background preparing for a career in severe disabilities and as a university educator who is interested in ways to encourage professionals in the field to be more cognizant of the influence of their cultural backgrounds and the value of becoming culturally inclusive.
Abstract:
It has been hypothesized that the brain categorizes stressors and utilizes neural response pathways that vary in accordance with the assigned category. If this is true, stressors should elicit patterns of neuronal activation within the brain that are category-specific. Data from previous immediate-early gene expression mapping studies have hinted that this is the case, but interstudy differences in methodology render conclusions tenuous. In the present study, immunolabelling for the expression of c-fos was used as a marker of neuronal activity elicited in the rat brain by haemorrhage, immune challenge, noise, restraint and forced swim. All stressors elicited c-fos expression in 25-30% of hypothalamic paraventricular nucleus corticotrophin-releasing-factor cells, suggesting that these stimuli were of comparable strength, at least with regard to their ability to activate the hypothalamic-pituitary-adrenal axis. In the amygdala, haemorrhage and immune challenge both elicited c-fos expression in a large number of neurons in the central nucleus of the amygdala, whereas noise, restraint and forced swim primarily elicited recruitment of cells within the medial nucleus of the amygdala. In the medulla, all stressors recruited similar numbers of noradrenergic (A1 and A2) and adrenergic (C1 and C2) cells. However, haemorrhage and immune challenge elicited c-fos expression in subpopulations of A1 and A2 noradrenergic cells that were significantly more rostral than those recruited by noise, restraint or forced swim. The present data support the suggestion that the brain recognizes at least two major categories of stressor, which we have referred to as 'physical' and 'psychological'. Moreover, the present data suggest that the neural activation footprint left in the brain by stressors can be used to determine the category to which they have been assigned by the brain.
Abstract:
To compare pathologic features of the cancers arising after different types of benign breast disease (BBD), we reviewed the invasive breast cancer slides of 169 women with a previous benign biopsy result. Lesions were categorized previously as nonproliferative, proliferative without atypia, or atypical hyperplasia. Pathologic features of the cancers were evaluated without knowledge of the previous BBD category. Estrogen and progesterone receptor immunohistochemistry was performed on available tissue blocks. The median times between a benign result and cancer were 100, 124, and 92 months for women with nonproliferative lesions, proliferative lesions without atypia, and atypical hyperplasia, respectively. Cancers in the 3 groups did not differ significantly in tumor size, axillary lymph node status, or histologic grade, and there was no significant difference in the distribution of histologic types of breast cancer. Lymphatic vessel invasion, extensive intraductal component, and hormone receptor status did not differ among BBD categories. The pathologic features of breast cancers that develop in women with a previous benign biopsy result do not vary according to the histologic category of the previous BBD.
Abstract:
The aim of the present study was to investigate the effect of high-pass filtering on TEOAE obtained from 2-month-old infants as a function of filter cut-off frequency, activity states and pass/fail status of infants. Two experiments were performed. In Experiment 1, 100 2-month-old infants (200 ears) in five activity states (asleep, awake but peaceful, sucking a pacifier, feeding, restless) were tested by use of TEOAE technology. Five different filter conditions were applied to the TEOAE responses post hoc. The filter conditions were set at 781 Hz (default setting), 1.0, 1.2, 1.4 and 1.6 kHz. Results from this experiment showed that TEOAE parameters, such as whole-wave reproducibility (WR) and signal-to-noise ratio (SNR) at 0.8 kHz and 1.6 kHz, changed as a function of the cut-off frequency. The findings suggest that the 1.6 kHz and 1.2 kHz filter conditions are optimal for WR and SNR pass/fail criteria, respectively. Although all infant recordings appeared to benefit from the filtering, infants in the noisy states seemed to benefit the most. In Experiment 2, the high-pass filtering technique was applied to 23 infants (35 ears) who apparently failed the TEOAE tests on initial screening but were subsequently awarded a pass status based on the results from a follow-up auditory brainstem response (ABR) assessment. The findings showed a significant decrease in noise contamination of the TEOAE with a corresponding significant increase in WR. With high-pass filtering at 1.6 kHz, 21/35 ears could be reclassified into the pass category.
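The post hoc filtering described in this abstract can be pictured with a generic first-order high-pass filter — a sketch only, assuming nothing about the screening device's actual DSP; the cut-off, sample rate, and synthetic signal below are illustrative:

```python
import math

def high_pass(x, cutoff_hz, fs_hz):
    """First-order (RC-style) high-pass filter:
    y[n] = a * (y[n-1] + x[n] - x[n-1]), with a = RC / (RC + dt)."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / fs_hz
    a = rc / (rc + dt)
    y = [x[0]]
    for n in range(1, len(x)):
        y.append(a * (y[-1] + x[n] - x[n - 1]))
    return y

# Synthetic "recording": a 100 Hz low-frequency drift plus a smaller
# 2 kHz component, sampled at 16 kHz; filtering at a 1.6 kHz cut-off
# suppresses the drift while largely preserving the 2 kHz component.
fs = 16000
t = [n / fs for n in range(512)]
signal = [math.sin(2 * math.pi * 100 * s)
          + 0.3 * math.sin(2 * math.pi * 2000 * s) for s in t]
filtered = high_pass(signal, 1600, fs)
```

Low-frequency energy (such as noise from a restless or feeding infant) is attenuated while components near the emission frequencies pass, which is consistent with the finding that recordings made in the noisier activity states benefited most from filtering.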