196 results for Cognitive Style
Abstract:
Increased central adiposity and abnormalities in glucose tolerance preceding type 2 diabetes can have demonstrable negative effects on cognitive function, even in ostensibly healthy, middle-aged females. The potential for GL manipulations to modulate glycaemic response and cognitive function in type 2 diabetes and obesity merits further investigation.
Abstract:
This review is an output of the International Life Sciences Institute (ILSI) Europe Marker Initiative, which aims to identify evidence-based criteria for selecting adequate measures of nutrient effects on health through comprehensive literature review. Experts in cognitive and nutrition sciences examined the applicability of these proposed criteria to the field of cognition with respect to the various cognitive domains usually assessed to reflect brain or neurological function. This review covers cognitive domains important in the assessment of neuronal integrity and function, commonly used tests and their state of validation, and the application of the measures to studies of nutrition and nutritional intervention trials. The aim is to identify domain-specific cognitive tests that are sensitive to nutrient interventions and from which guidance can be provided to aid the application of selection criteria for choosing the most suitable tests for proposed nutritional intervention studies using cognitive outcomes. The material in this review serves as a background and guidance document for nutritionists, neuropsychologists, psychiatrists, and neurologists interested in assessing mental health in terms of cognitive test performance and for scientists intending to test the effects of food or food components on cognitive function.
Abstract:
Abnormalities in glucose tolerance such as type 2 diabetes can have demonstrable negative effects on a range of cognitive functions. However, there was no evidence that low GL breakfasts administered acutely could confer benefits for cognitive function (ClinicalTrials.gov identifier, NCT01047813).
Abstract:
Low glycaemic index (GI) foods consumed at breakfast can enhance memory in comparison to high-GI foods; however, the impact of evening meal GI manipulations on cognition the following morning remains unexplored. Fourteen healthy males consumed a high-GI evening meal or a low-GI evening meal in a counterbalanced order on two separate evenings. Memory and attention were assessed before and after a high-GI breakfast the following morning. The high-GI evening meal elicited significantly higher evening glycaemic responses than the low-GI evening meal. Verbal recall was better the morning following the high-GI evening meal compared to after the low-GI evening meal. In summary, the GI of the evening meal was associated with memory performance the next day, suggesting a second meal cognitive effect. The present findings imply that an overnight fast may not be sufficient to control for previous nutritional consumption.
Abstract:
There is an increasing body of research investigating whether abnormal glucose tolerance is associated with cognitive impairments, the evidence from which is equivocal. A systematic search of the literature identified twenty-three studies which assessed either clinically defined impaired glucose tolerance (IGT) or variance in glucose tolerance within the clinically defined normal range (NGT). The findings suggest that poor glucose tolerance is associated with cognitive impairments, with decrements in verbal memory being most prevalent. However, the evidence for decrements in other domains was weak. The NGT studies report a stronger glucose tolerance-cognition association than the IGT studies, which is likely to be due to the greater number of glucose tolerance parameters and the more sensitive cognitive tests in the NGT studies compared to the IGT studies. It is also speculated that the negative cognitive impact of abnormalities in glucose tolerance increases with age, and that glucose consumption is most beneficial to individuals with poor glucose tolerance compared to individuals with normal glucose tolerance. The roles of potential mechanisms are discussed.
Abstract:
Literature reviews suggest flavonoids, a sub-class of polyphenols, are beneficial for cognition. This is the first review examining the effect of consumption of all polyphenol groups on cognitive function. Inclusion criteria were polyphenol vs. control interventions and epidemiological studies with an objective measure of cognitive function. Participants were healthy or mildly cognitively impaired adults. Studies were excluded if clinical assessment or diagnosis of Alzheimer’s disease, dementia, or cognitive impairment was the sole measure of cognitive function, or if the polyphenol was present with potentially confounding compounds such as caffeine (e.g. tea studies) or Ginkgo biloba. Twenty-eight studies were identified: four berry juice studies, four cocoa studies, 13 isoflavone supplement studies, three other supplement studies, and four epidemiological surveys. Overall, 16 studies reported cognitive benefits following polyphenol consumption. Evidence suggests that consuming additional polyphenols in the diet can lead to cognitive benefits; however, the observed effects were small. Declarative memory, and particularly spatial memory, appears most sensitive to polyphenol consumption, and effects may differ depending on polyphenol source. Polyphenol berry fruit juice consumption was most beneficial for immediate verbal memory, whereas isoflavone-based interventions were associated with significant improvements in delayed spatial memory and executive function. Comparison between studies was hampered by methodological inconsistencies. Hence, there was no clear evidence for an association between cognitive outcomes and polyphenol dose response, duration of intervention, or population studied. In conclusion, however, the findings do imply that polyphenol consumption has the potential to benefit cognition both acutely and chronically.
Abstract:
Most developers of behavior change support systems (BCSS) employ ad hoc procedures in their designs. This paper presents a novel discussion of how analyzing the relationship between attitude toward target behavior, current behavior, and attitude toward changing or maintaining behavior can facilitate the design of BCSS. We describe the three-dimensional relationships between attitude and behavior (3D-RAB) model and demonstrate how it can be used to categorize users based on variations in levels of cognitive dissonance. The proposed model seeks to provide a method for analyzing the user context within the persuasive systems design model, and it is evaluated using existing BCSS. We identified that although designers seem to address the various cognitive states, this is not done purposefully or in a methodical fashion, which implies that many existing applications are targeting users not considered at the design phase. As a result of this work, it is suggested that designers apply the 3D-RAB model in order to design solutions for targeted users.
Abstract:
Aim: To develop a brief, parent-completed instrument (‘ERIC’) for detection of cognitive delay in 10- to 24-month-olds born preterm, or with low birth weight, or with perinatal complications, and to establish its diagnostic properties. Method: Scores were collected from parents of 317 children meeting at least one inclusion criterion (birth weight <1500g; gestational age <34 completed weeks; 5-minute Apgar <7; presence of hypoxic-ischemic encephalopathy) and meeting no exclusion criteria. Children were assessed for cognitive delay using a criterion score (<80) on the Cognitive Scale of the Bayley Scales of Infant and Toddler Development, Third Edition. Items were retained according to their individual associations with delay. Sensitivity, specificity, and positive and negative predictive values were estimated, and a truncated ERIC was developed for use below 14 months. Results: ERIC detected 17 of the 18 delayed children in the sample, with 94.4% sensitivity (95% CI [confidence interval] 83.9-100%), 76.9% specificity (72.1-81.7%), 19.8% positive predictive value (11.4-28.2%), and 99.6% negative predictive value (98.7-100%); the positive likelihood ratio was 4.09 and the negative likelihood ratio 0.07; the associated area under the curve was .909 (.829-.960). Interpretation: ERIC has potential value as a quickly administered diagnostic instrument for ruling out early cognitive delay in infants of 10-24 months born preterm or at perinatal risk, and as a screen for cognitive delay. Further research may be needed before ERIC can be recommended for wide-scale use.
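The headline statistics in this abstract follow arithmetically from the reported counts. A minimal sketch reproducing them (the 2×2 table below is reconstructed from the reported figures of 317 children, 18 delayed, 17 detected, and 76.9% specificity; it is an illustration, not data taken from the paper):

```python
# Reconstruct the 2x2 confusion table from the reported figures:
# 317 children, 18 with cognitive delay, ERIC flagged 17 of them.
tp, fn = 17, 1                       # true positives, false negatives
non_delayed = 317 - 18               # 299 children without delay
tn = round(0.769 * non_delayed)      # specificity 76.9% -> ~230 true negatives
fp = non_delayed - tn                # ~69 false positives

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                 # positive predictive value
npv = tn / (tn + fn)                 # negative predictive value
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")
print(f"LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}")
```

Run as written, this recovers the abstract's values (94.4% sensitivity, 19.8% PPV, 99.6% NPV, LR+ 4.09, LR- 0.07), and it makes the interpretation concrete: the very high NPV is what supports ERIC's use for ruling out delay, while the low PPV reflects the low prevalence of delay in the sample.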
Abstract:
The BBC television drama anthology The Wednesday Play, broadcast from 1964-70 on the BBC1 channel, was high-profile and often controversial in its time and has since been central to accounts of British television’s ‘golden age’. This article demonstrates that production technologies and methods were more diverse at that time than is now acknowledged, and that The Wednesday Play dramas drew both approving and very critical responses from contemporary viewers and professional reviewers. This article analyses the ways that the physical spaces of production for different dramas in the series, and the different technologies of shooting and recording that were adopted in these production spaces, are associated with but do not determine aesthetic style. The adoption of single-camera location filming rather than the established production method of multi-camera studio videotaping in some of the dramas in the series has been important to The Wednesday Play’s significance, but each production method was used in different ways. The dramas drew their dramatic forms and aesthetic emphases from both theatre and cinema, as well as connecting with debates about the nature of drama for television. Institutional and regulatory frameworks such as control over staff working away from base, budgetary considerations and union agreements also impacted on decisions about how programmes were made. The article makes use of records from the BBC Written Archives Centre, as well as published scholarship. By placing The Wednesday Play in a range of overlapping historical contexts, its identity can be understood as transitional, differentiated and contested.
Abstract:
In recent years, research into the impact of genetic abnormalities on cognitive development, including language, has become recognized for its potential to make valuable contributions to our understanding of the brain–behaviour relationships underlying language acquisition as well as to understanding the cognitive architecture of the human mind. The publication of Fodor’s (1983) book The Modularity of Mind has had a profound impact on the study of language and the cognitive architecture of the human mind. Its central claim is that many of the processes involved in comprehension are undertaken by special brain systems termed ‘modules’. This domain specificity of language, or modularity, has become a fundamental feature that differentiates competing theories and accounts of language acquisition (Fodor 1983, 1985; Levy 1994; Karmiloff-Smith 1998). However, although the fact that the adult brain is modularized is hardly disputed, there are different views of how brain regions become specialized for specific functions. A question of some interest to theorists is whether the human brain is modularized from the outset (nativist view) or whether these distinct brain regions develop as a result of biological maturation and environmental input (neuroconstructivist view). One source of insight into these issues has been the study of developmental disorders, and in particular genetic syndromes, such as Williams syndrome (WS) and Down syndrome (DS). Because of their uneven profiles characterized by dissociations of different cognitive skills, these syndromes can help us address theoretically significant questions. Investigations into the linguistic and cognitive profiles of individuals with these genetic abnormalities have been used as evidence to advance theoretical views about innate modularity and the cognitive architecture of the human mind. The present chapter will be organized as follows.
To begin, two different theoretical proposals in the modularity debate will be presented. Then studies of linguistic abilities in WS and in DS will be reviewed. Here, the emphasis will be mainly on WS, because theoretical debates have focused primarily on WS, there is a larger body of literature on WS, and DS subjects have typically been used for the purposes of comparison. Finally, the modularity debate will be revisited in light of the literature review of both WS and DS. Conclusions will be drawn regarding the contribution of these two genetic syndromes to the issue of cognitive modularity, and in particular innate modularity.
Abstract:
As the fidelity of virtual environments (VE) continues to increase, the possibility of using them as training platforms is becoming increasingly realistic for a variety of application domains, including military and emergency personnel training. In the past, there was much debate on whether the acquisition and subsequent transfer of spatial knowledge from VEs to the real world is possible, or whether the differences in medium during training would essentially be an obstacle to truly learning geometric space. In this paper, the authors present various cognitive and environmental factors that not only contribute to this process, but also interact with each other to a certain degree, leading to a variable exposure time requirement for the process of spatial knowledge acquisition (SKA) to occur. The cognitive factors discussed include a variety of individual user differences: knowledge and experience; cognitive gender differences; aptitude and spatial orientation skill; and, finally, cognitive styles. The environmental factors discussed include size, spatial layout complexity, and landmark distribution. It may seem obvious that, since every individual's brain is unique (not only through experience but also through genetic predisposition), a one-size-fits-all approach to training would be illogical. Furthermore, considering that various cognitive differences may emerge only when a certain stimulus is present (e.g. a complex environmental space), it makes even more sense to understand how these factors can impact spatial memory, and to adapt the training session by providing visual/auditory cues as well as by changing the exposure time requirements for each individual. This research domain is important to VE training in general; within service and military domains, however, guaranteeing appropriate spatial training is critical in order to ensure that disorientation does not occur in a life-or-death scenario.
Abstract:
The present study investigated whether developmental changes in cognitive control may underlie improvements of time-based prospective memory. Five-, 7-, 9-, and 11-year-olds (N = 166) completed a driving simulation task (ongoing task) in which they had to refuel their vehicle at specific points in time (PM task). The availability of cognitive control resources was experimentally manipulated by imposing a secondary task that required divided attention. Children completed the driving simulation task both in a full attention condition and a divided attention condition where they had to carry out a secondary task. Results revealed that older children performed better than younger children on the ongoing task and PM task. Children performed worse on the ongoing and PM tasks in the divided attention condition compared to the full attention condition. With respect to time monitoring in the final interval prior to the PM target, divided attention interacted with age such that older children’s time monitoring was more negatively affected by the secondary task compared to younger children. Results are discussed in terms of developmental shifts from reactive to proactive monitoring strategies.
Abstract:
Although reviews of the association between polyphenol intake and cognition exist, research examining the cognitive effects of fruit, vegetable, and juice consumption across epidemiological and intervention studies has not been previously examined. Critical inclusion criteria were human participants, a measure of fruit, vegetable, or 100% juice consumption, an objective measure of cognitive function, and the absence of a clinical diagnosis of neuropsychological disease. Studies were excluded if consumption of fruit, vegetables, or juice was not assessed in isolation from other food groups, or if there was no statistical control for education or IQ. Seventeen of 19 epidemiological studies and 3 of 6 intervention studies reported significant benefits of fruit, vegetable, or juice consumption for cognitive performance. The data suggest that chronic consumption of fruits, vegetables, and juices is beneficial for cognition in healthy older adults. The limited data from acute interventions indicate that consumption of fruit juices can have immediate benefits for memory function in adults with mild cognitive impairment; however, as of yet, acute benefits have not been observed in healthy adults. Conclusions regarding an optimum dietary intake for fruits, vegetables, and juices are difficult to quantify because of substantial heterogeneity in the categorization of consumption of these foods.
Abstract:
A hoard found in Southbroom, Devizes in 1714 contained a group of copper-alloy figurines executed in both classical and local styles and depicting deities belonging to the Graeco-Roman and Gallo-Roman pantheons. The deities in a local style appear to form part of a larger tradition of figurines, predominantly found in the South-West, which are characterised both by a similar artistic style and by the use of Gallo-Roman symbolism and deities, such as the torc, ram-horned snake, carnivorous dog and Sucellus. The unique composition of the hoard in comparison with other hoards of similar date provides insights into the beliefs of Roman Britain.
Abstract:
In a previous article, I wrote a brief piece on how to enhance papers that have been published at one of the IEEE Consumer Electronics (CE) Society conferences to create papers that can be considered for publishing in IEEE Transactions on Consumer Electronics (T-CE) [1]. Basically, it included some hints and tips to enhance a conference paper into what is required for a full archival journal paper and not fall foul of self-plagiarism. This article focuses on writing original papers specifically for T-CE. After three years as the journal’s editor-in-chief (EiC), a previous eight years on the editorial board, and having reviewed some 4,000 T-CE papers, I decided to write this article to archive and detail for prospective authors what I have learned over this time. Of course, there are numerous articles on writing good papers—some are really useful [2], but they do not address the specific issues of writing for a journal whose topic (scope) is not widely understood or, indeed, is often misunderstood.