Abstract:
A full set of (higher-order) Casimir invariants for the Lie algebra gl(infinity) is constructed and shown to be well defined in the category O-FS generated by the highest weight (unitarizable) irreducible representations with only a finite number of nonzero weight components. Moreover, the eigenvalues of these Casimir invariants are determined explicitly in terms of the highest weight. Characteristic identities satisfied by certain (infinite) matrices with entries from gl(infinity) are also determined and generalize those previously obtained for gl(n) by Bracken and Green [A. J. Bracken and H. S. Green, J. Math. Phys. 12, 2099 (1971); H. S. Green, ibid. 12, 2106 (1971)]. (C) 1997 American Institute of Physics.
Abstract:
Objective: To assess, in patients undergoing glossectomy, the influence of the palatal augmentation prosthesis on the speech intelligibility and acoustic spectrographic characteristics of the formants of oral vowels in Brazilian Portuguese, specifically the first 3 formants (F1 [/a,e,u/], F2 [/ɔ,o,u/], and F3 [/a,ɔ/]). Design: Speech evaluation with and without a palatal augmentation prosthesis using blinded randomized listener judgments. Setting: Tertiary referral center. Patients: Thirty-six patients (33 men and 3 women) aged 30 to 80 (mean [SD], 53.9 [10.5]) years underwent glossectomy (14, total glossectomy; 12, total glossectomy and partial mandibulectomy; 6, hemiglossectomy; and 4, subtotal glossectomy) with use of the augmentation prosthesis for at least 3 months before inclusion in the study. Main Outcome Measures: Spontaneous speech intelligibility (assessed by expert listeners using a 4-category scale) and spectrographic formant assessment. Results: We found a statistically significant improvement of spontaneous speech intelligibility and of the average number of correctly identified syllables with the use of the prosthesis (P < .05). Statistically significant differences occurred for the F1 values of the vowels /a,e,u/; for F2 values, there was a significant difference for the vowels /ɔ,o,u/; and for F3 values, there was a significant difference for the vowels /a,ɔ/ (P < .001). Conclusions: The palatal augmentation prosthesis improved the intelligibility of spontaneous speech and syllables for patients who underwent glossectomy. It also increased the F2 and F3 values for all vowels and the F1 values for the vowels /ɔ,o,u/. This effect brought the values of many vowel formants closer to normal.
Abstract:
Objective: To evaluate whether including children with onset of symptoms between ages 7 and 12 years in the ADHD diagnostic category would: (a) increase the prevalence of the disorder at age 12, and (b) change the clinical and cognitive features, impairment profile, and risk factors for ADHD compared with findings in the literature based on the DSM-IV definition of the disorder. Method: A birth cohort of 2,232 British children was prospectively evaluated at ages 7 and 12 years for ADHD using information from mothers and teachers. The prevalence of diagnosed ADHD at age 12 was evaluated with and without the inclusion of individuals who met the DSM-IV age-of-onset criterion through mothers' or teachers' reports of symptoms at age 7. Children with onset of ADHD symptoms before versus after age 7 were compared on their clinical and cognitive features, impairment profile, and risk factors for ADHD. Results: Extending the age-of-onset criterion to age 12 resulted in a negligible increase in ADHD prevalence by age 12 years of 0.1%. Children who first manifested ADHD symptoms between ages 7 and 12 did not present correlates or risk factors that were significantly different from those of children who manifested symptoms before age 7. Conclusions: Results from this prospective birth cohort suggest that adults who are able to report symptom onset by age 12 also had symptoms by age 7, even if they are not able to report them. The data suggest that the prevalence estimate, correlates, and risk factors of ADHD will not be affected if the new diagnostic scheme extends the age-of-onset criterion to age 12. J. Am. Acad. Child Adolesc. Psychiatry, 2010;49(3):210-216.
Abstract:
Context: There is limited information on the prevalence and correlates of bipolar spectrum disorder in international population-based studies using common methods. Objectives: To describe the prevalence, impact, patterns of comorbidity, and patterns of service utilization for bipolar spectrum disorder (BPS) in the World Health Organization World Mental Health Survey Initiative. Design, Setting, and Participants: Cross-sectional, face-to-face, household surveys of 61,392 community adults in 11 countries in the Americas, Europe, and Asia assessed with the World Mental Health version of the World Health Organization Composite International Diagnostic Interview, version 3.0, a fully structured, lay-administered psychiatric diagnostic interview. Main Outcome Measures: Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) disorders, severity, and treatment. Results: The aggregate lifetime prevalences were 0.6% for bipolar type I disorder (BP-I), 0.4% for BP-II, 1.4% for subthreshold BP, and 2.4% for BPS. Twelve-month prevalences were 0.4% for BP-I, 0.3% for BP-II, 0.8% for subthreshold BP, and 1.5% for BPS. Severity of both manic and depressive symptoms as well as suicidal behavior increased monotonically from subthreshold BP to BP-I. By contrast, role impairment was similar across BP subtypes. Symptom severity was greater for depressive episodes than manic episodes, with approximately 74.0% of respondents with depression and 50.9% of respondents with mania reporting severe role impairment. Three-quarters of those with BPS met criteria for at least 1 other disorder, with anxiety disorders (particularly panic attacks) being the most common comorbid condition. Less than half of those with lifetime BPS received mental health treatment, particularly in low-income countries, where only 25.2% reported contact with the mental health system.
Conclusions: Despite cross-site variation in the prevalence rates of BPS, the severity, impact, and patterns of comorbidity were remarkably similar internationally. The uniform increases in clinical correlates, suicidal behavior, and comorbidity across each diagnostic category provide evidence for the validity of the concept of BPS. Treatment needs for BPS are often unmet, particularly in low-income countries.
Abstract:
One of the challenges in screening for dementia in developing countries is related to performance differences due to educational and cultural factors. This study evaluated the accuracy of single screening tests as well as combined protocols including the Mini-Mental State Examination (MMSE), Verbal Fluency animal category (VF), Clock Drawing Test (CDT), and Pfeffer Functional Activities Questionnaire (PFAQ) to discriminate illiterate elderly with and without Alzheimer's disease (AD) in a clinical sample. Cross-sectional study with 66 illiterate outpatients diagnosed with mild and moderate AD and 40 illiterate normal controls. Diagnosis of AD was based on NINCDS-ADRDA criteria. All patients underwent a diagnostic protocol including a clinical interview based on the CAMDEX sections. ROC curve area analyses were carried out to compare sensitivity and specificity of the cognitive tests in differentiating the two groups (each test separately and in two-by-two combinations). Scores for all cognitive (MMSE, CDT, VF) and functional assessments (PFAQ) were significantly different between the two groups (p < 0.001). The best screening instruments for this sample of illiterate elderly were the MMSE and the PFAQ. The cut-off scores for the MMSE, VF, CDT, and PFAQ were 17.5, 7.5, 2.5, and 11.5, respectively. The most sensitive combination came from the MMSE and PFAQ (94.1%), and the best specificity was observed with the combination of the MMSE and CDT (89%). Illiterate patients can be successfully screened for AD using well-known screening instruments, especially in combined protocols.
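The cut-off approach described above reduces to counting, at a given threshold, the fraction of patients flagged (sensitivity) and the fraction of controls cleared (specificity). A minimal sketch of that computation, using the study's MMSE cut-off of 17.5 but entirely hypothetical scores:

```python
def sens_spec(scores_cases, scores_controls, cutoff, higher_is_impaired=False):
    """Sensitivity = fraction of cases flagged as impaired;
    specificity = fraction of controls cleared.
    For the MMSE, LOWER scores indicate impairment (the default here)."""
    if higher_is_impaired:
        flag = lambda s: s > cutoff
    else:
        flag = lambda s: s < cutoff
    sens = sum(flag(s) for s in scores_cases) / len(scores_cases)
    spec = sum(not flag(s) for s in scores_controls) / len(scores_controls)
    return sens, spec

# Hypothetical MMSE scores (not from the study); cut-off 17.5 as reported.
ad_scores = [12, 15, 16, 17, 14, 18]
control_scores = [20, 22, 19, 18, 23, 21]
sens, spec = sens_spec(ad_scores, control_scores, 17.5)
```

Sweeping the cutoff over all observed score values and plotting sensitivity against (1 - specificity) yields the ROC curve whose area the study compares across instruments.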
Abstract:
Objectives We studied the relationship between changes in body composition and changes in blood pressure levels. Background The mechanisms underlying the frequently observed progression from pre-hypertension to hypertension are poorly understood. Methods We examined 1,145 subjects from a population-based survey at baseline in 1994/1995 and at follow-up in 2004/2005. First, we studied individuals pre-hypertensive at baseline who, during 10 years of follow-up, either had normalized blood pressure (PreNorm, n = 48), persistently had pre-hypertension (PrePre, n = 134), or showed progression to hypertension (PreHyp, n = 183). In parallel, we studied predictors for changes in blood pressure category in individuals hypertensive at baseline (n = 429). Results After 10 years, the PreHyp group was characterized by a marked increase in body weight (+5.71% [95% confidence interval (CI): 4.60% to 6.83%]) that was largely the result of an increase in fat mass (+17.8% [95% CI: 14.5% to 21.0%]). In the PrePre group, both the increases in body weight (+1.95% [95% CI: 0.68% to 3.22%]) and fat mass (+8.09% [95% CI: 4.42% to 11.7%]) were significantly less pronounced than in the PreHyp group (p < 0.001 for both). The PreNorm group showed no significant change in body weight (-1.55% [95% CI: -3.70% to 0.61%]) and fat mass (+0.20% [95% CI: -6.13% to 6.52%], p < 0.05 for both, vs. the PrePre group). Conclusions After 10 years of follow-up, hypertension developed in 50.1% of individuals with pre-hypertension and only 6.76% went from hypertensive to pre-hypertensive blood pressure levels. An increase in body weight and fat mass was a risk factor for the development of sustained hypertension, whereas a decrease was predictive of a decrease in blood pressure. (J Am Coll Cardiol 2010; 56: 65-76) (C) 2010 by the American College of Cardiology Foundation
Abstract:
Background: Verbal fluency (VF) tasks are simple and efficient clinical tools to detect executive dysfunction and lexico-semantic impairment. VF tasks are widely used in patients with suspected dementia, but their accuracy for detection of mild cognitive impairment (MCI) is still under investigation. Schooling in particular may influence the subject's performance. The aim of this study was to compare the accuracy of two semantic categories (animals and fruits) in discriminating controls, MCI patients, and Alzheimer's disease (AD) patients. Methods: 178 subjects, comprising 70 controls (CG), 70 MCI patients, and 38 AD patients, were tested on two semantic VF tasks. The sample was divided into two schooling groups: those with 4-8 years of education and those with 9 or more years. Results: Both VF tasks - animal fluency (VFa) and fruit fluency (VFf) - adequately discriminated CG from AD in the total sample (AUC = 0.88 +/- 0.03, p < 0.0001) and in both education groups, and highly educated MCI from AD (VFa: AUC = 0.82 +/- 0.05, p < 0.0001; VFf: AUC = 0.85 +/- 0.05, p < 0.0001). Both tasks were moderately accurate in discriminating CG from MCI (VFa: AUC = 0.68 +/- 0.04, p < 0.0001; VFf: AUC = 0.73 +/- 0.04, p < 0.0001) regardless of the schooling level, and MCI from AD in the total sample (VFa: AUC = 0.74 +/- 0.05, p < 0.0001; VFf: AUC = 0.76 +/- 0.05, p < 0.0001). Neither of the two tasks differentiated less educated MCI patients from AD. In the total sample, fruit fluency best discriminated CG from MCI and MCI from AD; a combination of the two improved the discrimination between CG and AD. Conclusions: Both categories were similar in discriminating CG from AD; the combination of both categories improved the accuracy for this distinction. Both tasks were less accurate in discriminating CG from MCI, and MCI from AD.
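The AUC values reported above have a direct probabilistic reading: the AUC equals the probability that a randomly chosen control outscores a randomly chosen patient on the fluency task (the Mann-Whitney formulation, with ties counted as half). A minimal sketch with hypothetical fluency counts, not data from the study:

```python
def auc(controls, patients):
    """AUC for 'controls score higher than patients':
    the fraction of (control, patient) pairs where the control wins,
    counting ties as 0.5 (Mann-Whitney U / n1*n2)."""
    pairs = [(c, p) for c in controls for p in patients]
    wins = sum(1.0 if c > p else 0.5 if c == p else 0.0 for c, p in pairs)
    return wins / len(pairs)

cg_fluency = [18, 15, 20, 17, 16]  # hypothetical control word counts
ad_fluency = [8, 11, 9, 15, 7]     # hypothetical AD word counts
result = auc(cg_fluency, ad_fluency)
```

An AUC of 0.5 means the task discriminates no better than chance, which is why values around 0.7 (CG vs. MCI) count as only moderate accuracy while 0.88 (CG vs. AD) is adequate.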
Abstract:
Education significantly impacts cognitive performance of older adults even in the absence of dementia. Some cognitive tests seem less vulnerable to the influence of education and thus may be more suitable for cognitive assessment of older adults with heterogeneous backgrounds. The objective of this study was to investigate which tests in a cognitive battery were less influenced by educational levels in a sample of cognitively unimpaired older Brazilians. In addition, we evaluated the impact of very high educational levels on cognitive performance. The cognitive battery consisted of the Mini Mental State Examination (MMSE), Cambridge Cognitive Test (CAMCOG), Clock Drawing Test, Short Cognitive Performance Test (SKT), Rivermead Behavioural Memory Test (RBMT), Fuld Object Memory Evaluation (FOME), Verbal Fluency Test (VF) fruit category, Trail Making Test A and B, WAIS-R Vocabulary, and Block Design. Education did not exert a significant influence on the RBMT, FOME, and VF (p > .05). Subjects with very high educational levels had similar performance on the latter tests when compared with those with intermediate and low levels of education. In conclusion, the RBMT, FOME, and VF fruit category seem to be appropriate tools for the assessment of cognitive function in elderly Brazilians with varying degrees of educational attainment.
Abstract:
Background. Clinical manifestations of dengue vary in different areas of endemicity and between specific age groups, whereas predictors of outcome have remained controversial. In Brazil, the disease burden predominantly affects adults, with an increasing trend toward progression to dengue hemorrhagic fever (DHF) noted. Methods. A cohort of adults with confirmed cases of dengue was recruited in central Brazil in 2005. Patients were classified according to the severity of their disease. Associations of antibody responses, viremia levels (as determined by real-time polymerase chain reaction [PCR]), and serotypes (as determined by multiplex PCR) with disease severity were evaluated. Results. Of the 185 symptomatic patients > 14 years of age who had a confirmed case of dengue, 26.5% and 23.2% were classified as having intermediate dengue fever (DF)/DHF (defined as internal hemorrhage, plasma leakage, manifested signs of shock, and/or thrombocytopenia [platelet count, <= 50,000 platelets/mm(3)]) and DHF, respectively. The onset of intermediate DF/DHF and DHF occurred at a late stage of disease, around the period of defervescence. Patients with DHF had abnormal liver enzyme levels, with a > 3-fold increase in aspartate aminotransferase level, compared with the range of values considered to be normal. Overall, 65% of patients presented with secondary dengue virus infections, with such infection occurring in similar proportions of patients in each of the 3 disease category groups. Dengue virus serotype 3 (DV3) was the predominant serotype, and viremia was detected during and after defervescence among patients with DHF or intermediate DF/DHF. Conclusions. Viremia was detected after defervescence in adult patients classified as having DHF or intermediate DF/DHF. Secondary infection was not a predictor of severe clinical manifestation in adults infected with the DV3 serotype.
Abstract:
Autopsy is a valuable tool in evaluating diagnostic accuracy. Solid malignancies may have a protracted presentation, and diagnosis frequently requires imaging and deep-sited biopsies; discrepancies between clinical and postmortem diagnoses may occur at a high rate in these diseases. Here, we analyzed the occurrence of clinico-pathological discrepancies in the diagnoses of solid malignancies in a Brazilian academic hospital. We reviewed charts and autopsy reports of patients who died from 2001 to 2003 with at least one solid neoplasm. Patients were classified as concordant or discordant cases regarding cancer diagnosis. Discordant cases were categorized as undiagnosed cases (no suspicion of cancer) or misdiagnosed cases (clinical suspicion of cancer but incompletely diagnosed). Among the 264 patients with a single non-incidental solid neoplasm, the clinico-pathological discrepancy rate was 37.1%. Liver (22.5%), lung (19.4%), and pancreatic cancer (15.3%) were the most frequent malignancies in the discordant group. The misdiagnosed category comprised 68% of the discordant cases, i.e., there was no correct knowledge about the tumor primary site and/or the histological type during life. Our data show that a high rate of discrepancies occurs in solid malignancies. Autopsies may provide the basis for a better understanding of diagnostic deficiencies in different circumstances. (C) 2008 Elsevier GmbH. All rights reserved.
Abstract:
Background: Coffee consumption has been associated with a lower risk of diabetes, but little is known about the mechanisms responsible for this association, especially related to the time when coffee is consumed. Objective: We examined the long-term effect of coffee, globally and according to the accompanying meal, and of tea, chicory, and caffeine on type 2 diabetes risk. Design: This was a prospective cohort study including 69,532 French women, aged 41-72 y from the E3N/EPIC (Etude Epidemiologique aupres de Femmes de la Mutuelle Generale de l`Education Nationale/European Prospective Investigation into Cancer and Nutrition) cohort study, without diabetes at baseline. Food and drink intakes per meal were assessed by using a validated diet-history questionnaire in 1993-1995. Results: During a mean follow-up of 11 y, 1415 new cases of diabetes were identified. In multivariable Cox regression models, the hazard ratio in the highest category of coffee consumption [>= 3 cups (375 mL)/d] was 0.73 (95% CI: 0.61, 0.87; P for trend < 0.001), in comparison with no coffee consumption. This inverse association was restricted to coffee consumed at lunchtime (hazard ratio: 0.66; 95% CI: 0.57, 0.76) when comparing >1.1 cup (125 mL)/meal with no intake. At lunchtime, this inverse association was observed for both regular and decaffeinated coffee and for filtered and black coffee, with no effect of sweetening. Total caffeine intake was also associated with a statistically significantly lower risk of diabetes. Neither tea nor chicory consumption was associated with diabetes risk. Conclusions: Our data support an inverse association between coffee consumption and diabetes and suggest that the time of drinking coffee plays a distinct role in glucose metabolism. Am J Clin Nutr 2010; 91: 1002-12.
Abstract:
OBJECTIVE. To evaluate the effect of oral hygiene with 0.12% chlorhexidine gluconate on the incidence of nosocomial pneumonia and ventilator-associated pneumonia (VAP) in children undergoing cardiac surgery. DESIGN. Prospective, randomized, double-blind, placebo-controlled trial. SETTING. Pediatric intensive care unit (PICU) at a tertiary care hospital. PATIENTS. One hundred sixty children undergoing surgery for congenital heart disease, randomized into 2 groups: chlorhexidine (n = 87) and control (n = 73). INTERVENTIONS. Oral hygiene with 0.12% chlorhexidine gluconate or placebo preoperatively and twice a day postoperatively until PICU discharge or death. RESULTS. Patients in the experimental and control groups had similar ages (median, 12.2 vs 10.8 months; P = .72) and Risk Adjustment for Congenital Heart Surgery 1 score distribution (66% in category 1 or 2 in both groups; P = .17). The incidence of nosocomial pneumonia was 29.8% versus 24.6% (P = .46) and the incidence of VAP was 18.3% versus 15% (P = .57) in the chlorhexidine and the control group, respectively. There was no difference in intubation time (P = .34), need for reintubation (P = .37), time interval between hospitalization and nosocomial pneumonia diagnosis (P = .63), time interval between surgery and nosocomial pneumonia diagnosis (P = .10), and time on antibiotics (P = .77) and vasoactive drugs (P = .16) between groups. Median length of PICU stay (3 vs 4 days; P = .53), median length of hospital stay (12 vs 11 days; P = .67), and 28-day mortality (5.7% vs 6.8%; P = .77) were also similar in the chlorhexidine and the control group. CONCLUSIONS. Oral hygiene with 0.12% chlorhexidine gluconate did not reduce the incidence of nosocomial pneumonia and VAP in children undergoing cardiac surgery.
Abstract:
Six of the short dietary questions used in the 1995 National Nutrition Survey (listed below) were evaluated for relative validity, both directly and indirectly, and for consistency, by documenting the differences in mean intakes of foods and nutrients, as measured on the 24-hour recall, between groups with different responses to the short questions.
1. Including snacks, how many times do you usually have something to eat in a day, including evenings?
2. How many days per week do you usually have something to eat for breakfast?
3. In the last 12 months, were there any times that you ran out of food and couldn't afford to buy more?
4. What type of milk do you usually consume?
5. How many serves of vegetables do you usually eat each day? (a serve = 1/2 cup cooked vegetables or 1 cup of salad vegetables)
6. How many serves of fruit do you usually eat each day? (a serve = 1 medium piece or 2 small pieces of fruit or 1 cup of diced pieces)
These comparisons were made for males and females overall and for population sub-groups of interest, including age, socio-economic disadvantage, region of residence, country of birth, and BMI category. Several limitations of this evaluation of the short questions, as discussed in the report, need to be kept in mind, including:
· The method available for comparison (24-hour recall) was not a gold standard, as it measures yesterday's intake. This limitation was overcome by examining only mean differences between groups of respondents, since mean intake for a group can provide a reasonable approximation of 'usual' intake.
· The need to define and identify, post hoc, from the 24-hour recall the number of eating occasions, and the occasions identified by the respondents as breakfast.
· Predetermined response categories for some of the questions effectively limited the number of categories available for evaluation.
· Other foods and nutrients, not selected for this evaluation, may have an indirect relationship with the question and might have shown stronger and more consistent responses.
· The number of responses in some categories of the short questions (e.g., for food security) may have been too small to detect significant differences between population sub-groups.
· No information was available to examine the validity of these questions for detecting differences over time (establishing trends) in food habits and indicators of selected nutrient intakes.
By contrast, the strength of this evaluation was its very large sample size (atypical of most validation studies of dietary assessment) and thus the opportunity to investigate question performance in a range of broad population sub-groups compared with a well-conducted, quantified survey of intakes. The results of the evaluation are summarised below for each of the questions, and specific recommendations for future testing, modification, and use are provided for each question. The report concludes with some general recommendations for the further development and evaluation of short dietary questions.
Abstract:
Empirical studies on the impact of women's paid jobs on their empowerment and welfare in the Bangladesh context are rare. The few studies on the issue to date have all been confined to garment workers only, although studies indicate that women's workforce participation in Bangladesh has increased across the board. Moreover, none of these studies has made an attempt to control for non-working women and/or applied any statistical technique to control for the effects of other pertinent determinants of women's empowerment and welfare, such as education, age, religion, and place of living. This study overcomes these drawbacks and presents alternative assessments of the link between women's workforce participation and empowerment on the basis of survey data from the two largest cities in Bangladesh. While the generic assessment indicates that women's paid jobs have positive implications for women's participation in decisions on fertility, children's education, and healthcare, as well as their possession and control of resources, the econometric assessment negates most of these observations. Women's education, on the other hand, appears to be more important than their participation in the labour force. The study underlines the fact that by omitting other relevant explanatory variables from the analysis, the previous studies might have overestimated the impact of women's paid work on their empowerment. Among other things, the paper also highlights the importance of women's job category, religion, and regional differences for women's empowerment.