848 results for Cognitive Effort
Abstract:
Purpose. Whereas many previous studies have identified the association between sustained near work and myopia, few have assessed the influence of concomitant levels of cognitive effort. This study investigates the effect of cognitive effort on near-work induced transient myopia (NITM). Methods. Subjects comprised six early-onset myopes (EOM; mean age 23.7 yrs; mean onset 10.8 yrs), six late-onset myopes (LOM; mean age 23.2 yrs; mean onset 20.0 yrs) and six emmetropes (EMM; mean age 23.8 yrs). Dynamic, monocular, ocular accommodation was measured with the Shin-Nippon SRW-5000 autorefractor. Subjects engaged passively or actively in a 5-minute arithmetic sum-checking task presented monocularly on an LCD monitor via a Badal optical system. In all conditions the task was initially located at near (4.50 D) and, immediately following the task, changed instantaneously to far (0.00 D) for a further 5 minutes. The combinations of active (A) and passive (P) cognition were randomly allocated as P:P, A:P, A:A and P:A. Results. For the initial near task, LOMs showed a significantly less accurate accommodative response than either EOMs or EMMs (p < 0.001). For the far task, post hoc analyses of refraction identified EOMs as demonstrating significant NITM compared to LOMs (p < 0.05), who in turn showed greater NITM than EMMs (p < 0.001). The data show that for EOMs the level of cognitive activity operating during the near and far tasks determines the persistence of NITM, persistence being maximal when active cognition at near is followed by passive cognition at far. Conclusions. Compared with EMMs, EOMs and LOMs are particularly susceptible to NITM, such that sustained near vision reduces subsequent accommodative accuracy for far vision. It is speculated that the marked NITM found in EOM may be a consequence of the crystalline lens thinning shown to be a developmental feature of EOM. Whereas the role of small amounts of retinal defocus in myopigenesis remains equivocal, the results show that account needs to be taken of cognitive demand in assessing phenomena such as NITM.
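For concreteness, NITM is typically quantified as the post-task myopic shift of the far response relative to a pre-task baseline. Below is a minimal sketch of that arithmetic; the readings and the 0.125 D persistence criterion are invented for illustration and are not the study's data or analysis code.

```python
import numpy as np

# Hypothetical autorefractor readings in dioptres (D); illustrative only.
near_demand = 4.50                                   # near-task demand
near_response = np.array([3.90, 4.00, 3.85])         # sampled near responses
pre_task_far = 0.10                                  # far baseline (0.00 D demand)
post_task_far = np.array([0.55, 0.40, 0.30, 0.20])   # samples after task offset

# Accommodative accuracy at near: lag = demand - response
# (a larger lag means a less accurate response, as reported for LOMs).
near_lag = near_demand - near_response.mean()

# NITM: the myopic shift at far relative to the pre-task baseline.
nitm = post_task_far - pre_task_far

# Persistence: number of samples still above an assumed 0.125 D criterion.
persistence = int((nitm > 0.125).sum())

print(f"near lag: {near_lag:.2f} D; initial NITM: {nitm[0]:.2f} D; "
      f"samples above criterion: {persistence}")
```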
Abstract:
Several definitions have been proposed to identify the boundaries between languages and dialects, yet these distinctions are inconsistent and often as political as they are linguistic (Chambers & Trudgill, 1998). This thesis offers a different perspective by investigating how closely related linguistic varieties are represented in the brain and whether they engender cognitive effects similar to those often reported for bilingual speakers of recognised independent languages, based on the principles of Green's (1998) model of bilingual language control. Study 1 investigated whether bidialectal speakers exhibit benefits in non-linguistic inhibitory control from the maintenance and use of two dialects, as has been proposed for bilinguals, who regularly employ inhibitory control mechanisms to suppress one language while speaking the other. The results revealed virtually identical performance across the monolingual, bidialectal and bilingual participant groups, failing to find a cognitive control advantage not only for bidialectal speakers over monodialectals/monolinguals but also for bilinguals, adding to a growing body of evidence which challenges the bilingual advantage in non-linguistic inhibitory control. Study 2 investigated the cognitive representation of dialects using an adaptation of a Language Switching Paradigm to determine whether the effort required to switch between dialects is similar to the effort required to switch between languages. The results closely replicated what is typically shown for bilinguals: bidialectal speakers exhibited a symmetrical switch cost, like balanced bilinguals, while monodialectal speakers, who were taught to use the dialect words before the experiment, showed the asymmetrical switch cost typically displayed by second language learners. These findings augment Green's (1998) model by suggesting that words from different dialects are also tagged in the mental lexicon, just like words from different languages, and that as a consequence it takes cognitive effort to switch between these mental settings. Study 3 explored an additional explanation for language switching costs by investigating whether changes in articulatory settings when switching between different linguistic varieties could, at least in part, be responsible for these previously reported switching costs. Using a paradigm which required participants to switch between different articulatory settings, e.g. glottal stops/aspirated /t/ and whispered/normal phonation, the results also demonstrated the presence of switch costs, suggesting that switching between linguistic varieties has a motor task-switching component which is independent of representations in the mental lexicon. Finally, Study 4 investigated how much exposure is needed to distinguish between different varieties, using two novel language categorisation tasks which compared German vs Russian cognates, and Standard Scottish English vs Dundonian Scots cognates. The results showed that only a small amount of exposure (a couple of days' worth) is needed to enable listeners to distinguish between different languages, dialects or accents on the basis of general phonetic and phonological characteristics, suggesting that the general sound template of a language variety can be represented before exact lexical representations have been formed.
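To make the switch-cost measure concrete: a minimal sketch of how symmetrical versus asymmetrical costs are derived from naming latencies in a switching paradigm. The variety labels and latencies below are invented for illustration and are not the thesis data.

```python
# Hypothetical mean naming latencies (ms) from a cued switching task.
# "repeat" = same variety as the previous trial, "switch" = the other variety.
rt = {
    ("variety_A", "repeat"): 620.0, ("variety_A", "switch"): 700.0,
    ("variety_B", "repeat"): 630.0, ("variety_B", "switch"): 710.0,
}

# Switch cost per variety: switch RT minus repeat RT.
cost_a = rt[("variety_A", "switch")] - rt[("variety_A", "repeat")]
cost_b = rt[("variety_B", "switch")] - rt[("variety_B", "repeat")]

# Roughly equal costs are the symmetrical pattern reported for balanced
# bilinguals; a markedly larger cost for the dominant variety would be the
# asymmetry typical of second language learners.
asymmetry = cost_a - cost_b
print(f"cost A: {cost_a} ms, cost B: {cost_b} ms, asymmetry: {asymmetry} ms")
```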
Overall, these results show that bidialectal use of typologically closely related linguistic varieties engages cognitive mechanisms similar to those involved in bilingual language use. This thesis is the first to explore the cognitive representations and mechanisms that underpin the use of typologically closely related varieties. It offers a number of novel insights and serves as the starting point for a research agenda that can yield a more fine-grained understanding of the cognitive mechanisms that may operate when speakers use closely related varieties. In doing so, it urges caution when making assumptions about differences in the mechanisms used by individuals commonly categorised as monolinguals, to avoid potentially confounding any comparisons made with bilinguals.
Abstract:
In this paper we present a new neuroeconomic model of decision-making, applied to Attention-Deficit/Hyperactivity Disorder (ADHD). The model is based on the hypothesis that decision-making depends on the evaluation of expected rewards and risks, assessed simultaneously in two decision spaces: the personal decision space (PDS) and the interpersonal emotional decision space (IDS). Motivation to act is triggered by necessities identified in the PDS or IDS. The adequacy of an action in fulfilling a given necessity is assumed to depend on the expected reward and risk evaluated in the decision spaces. Conflict generated by expected reward and risk influences the easiness (cognitive effort) and the future perspective of the decision. Finally, the willingness (not) to act is proposed to be a function of the expected reward (or risk), adequacy, easiness and future perspective. The two most frequent clinical forms are ADHD hyperactive (AD/HDhyp) and ADHD inattentive (AD/HDin). AD/HDhyp behavior is hypothesized to be a consequence of experiencing high reward expectancies for short periods of time, low risk evaluation, and a short future perspective for decision-making. AD/HDin is hypothesized to be a consequence of experiencing high reward expectancies for long periods of time, low risk evaluation, and a long future perspective for decision-making.
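To illustrate the proposed dependency, here is a minimal sketch assuming one possible functional form, a simple multiplicative combination. The abstract states only that willingness is a function of these variables; the equation, the 0-1 scaling and the numbers below are our illustrative assumptions, not the paper's specification.

```python
from dataclasses import dataclass

@dataclass
class DecisionInputs:
    expected_reward: float     # anticipated reward in PDS/IDS, scaled 0..1
    expected_risk: float       # anticipated risk, scaled 0..1
    adequacy: float            # how well the action fulfils the necessity, 0..1
    easiness: float            # inverse of cognitive effort under conflict, 0..1
    future_perspective: float  # temporal horizon of the decision, 0..1

def willingness_to_act(d: DecisionInputs) -> float:
    """Illustrative form only: net expected value scaled by adequacy,
    easiness and future perspective."""
    net_value = d.expected_reward - d.expected_risk
    return net_value * d.adequacy * d.easiness * d.future_perspective

# AD/HDhyp profile per the hypothesis: high reward expectancy, low risk
# evaluation, short future perspective.
hyp = DecisionInputs(expected_reward=0.9, expected_risk=0.1,
                     adequacy=0.7, easiness=0.8, future_perspective=0.2)
print(f"willingness (AD/HDhyp profile): {willingness_to_act(hyp):.2f}")
```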
Abstract:
Previous research has demonstrated superior learning by participants presented with augmented task information retroactively versus proactively (Patterson & Lee, 2008, 2010). Theoretical explanations of these findings relate to the cognitive effort invested by participants during motor skill acquisition. The present study extended previous research by utilizing a physiological index, power spectral analysis of heart rate variability, previously shown to be sensitive to the degree of cognitive effort invested during the performance of a motor task (e.g., increased cognitive effort results in an increased LF/HF ratio). Participants were required to learn 18 different key-pressing sequences. As expected, the proactive condition demonstrated superior RS during acquisition, with the retroactive condition demonstrating superior RS during retention. Measures of the LF/HF ratio indicated that the retroactive participants were investing significantly less cognitive effort in the retention period compared to the proactive participants (p < .05) as a function of learning.
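For reference, a minimal sketch of how an LF/HF ratio can be derived from R-R intervals with Welch power spectral estimation, using the conventional 0.04-0.15 Hz (LF) and 0.15-0.40 Hz (HF) bands. The data are simulated, and this is a generic method, not necessarily the study's exact pipeline.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid

# Simulated R-R intervals (s): roughly 70 bpm with random variability.
rng = np.random.default_rng(0)
beat_times = np.cumsum(0.857 + 0.02 * rng.standard_normal(300))
rr = np.diff(beat_times)              # R-R intervals between successive beats
t_rr = beat_times[1:]                 # time stamp of each interval

# Resample the irregularly spaced series onto a uniform 4 Hz grid.
fs = 4.0
t_uniform = np.arange(t_rr[0], t_rr[-1], 1.0 / fs)
rr_uniform = interp1d(t_rr, rr, kind="cubic")(t_uniform)

# Welch PSD, then integrate power in the LF and HF bands.
f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)
lf_band = (f >= 0.04) & (f < 0.15)
hf_band = (f >= 0.15) & (f < 0.40)
lf = trapezoid(psd[lf_band], f[lf_band])
hf = trapezoid(psd[hf_band], f[hf_band])
print(f"LF/HF ratio: {lf / hf:.2f}")  # higher ratio ~ more cognitive effort
```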
Abstract:
This study used three Oculomotor Delayed Response (ODR) tasks to investigate the unique cognitive demands during the delay period. Changes in alpha power were used to index cognitive effort during the delay period. Continuous EEG from 25 healthy young adults (18-34 years) was recorded using a dense electrode array. The data were analyzed with 6-cycle Morlet wavelet decompositions in the frequency range of 2-30 Hz to create time-frequency decompositions for four midline electrode sites. The 99% confidence intervals, using the bootstrapped 20% trimmed mean of the 10 Hz frequency, were used to examine the differences among conditions. Compared to the two Memory conditions (Match and Non-Match), the Control condition yielded significant differences in all frequencies over the entire trial period, suggesting a cognitive state difference. Compared to the Match condition, the Non-Match condition had lower alpha activity during the delay period at each midline electrode site, reflecting the higher cognitive effort required.
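As an illustration of the analysis, a minimal sketch of a 6-cycle complex Morlet time-frequency decomposition over 2-30 Hz, applied to one simulated channel. The sampling rate, trial length and injected alpha burst are assumptions for demonstration, not the study's recording parameters.

```python
import numpy as np

fs = 250.0                              # assumed sampling rate (Hz)
t = np.arange(0, 5.0, 1.0 / fs)        # one simulated 5 s trial
rng = np.random.default_rng(1)
eeg = 0.5 * rng.standard_normal(t.size)
delay = (t > 2.0) & (t < 3.5)          # assumed delay-period window
eeg[delay] += np.sin(2 * np.pi * 10 * t[delay])   # 10 Hz alpha burst

def morlet_power(signal, fs, freq, n_cycles=6):
    """Power over time from convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)        # temporal width (s)
    wt = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
    wavelet = (np.exp(2j * np.pi * freq * wt)
               * np.exp(-wt**2 / (2 * sigma_t**2)))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit energy
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

freqs = np.arange(2.0, 31.0)           # 2-30 Hz, as in the study
tfr = np.array([morlet_power(eeg, fs, f) for f in freqs])  # (freq, time)

alpha = tfr[freqs == 10.0].ravel()     # the 10 Hz row used for the CIs
print(f"mean 10 Hz power, delay: {alpha[delay].mean():.3f} "
      f"vs rest of trial: {alpha[~delay].mean():.3f}")
```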
Abstract:
Technological change and the growth of the aging population are two major trends of the last decade. Over this period, the ubiquitous spread of mobile telephony has changed people's communication habits. The constant turnover of handsets, the growing number of functions, the iconographic diversity, the variety of interfaces and the complexity of navigation now demand not only more time for adaptation and learning but also a considerable cognitive effort. Information and communication technologies (ICT) have become indispensable tools of modern life. For older adults, this perpetually changing universe of new devices is an obstacle to accessing information and thus contributes to the generational gap. The lack of reference points and support, together with the physical or cognitive impairments that some people develop as they age, often makes such devices impossible to use. Yet smart products that are more accessible, both physically and cognitively, are a real necessity in our modern society, allowing older adults to live in a more autonomous and "connected" way. This research aims to lay out the usage challenges of existing mobile phones and, in particular, to identify the usage problems that older adults experience. The study targets the segment of the population that is unaccustomed to communication technologies, which are most often aimed only at younger users and professionals. By looking at usage habits, this qualitative research will allow us to establish a profile of older adults in relation to ICT and to better understand the challenges they face in perceiving, understanding and using mobile phone interfaces.
Abstract:
Software development today must increasingly cope with the complexity of very large programs, built and maintained by large teams spread across multiple sites. In their day-to-day tasks, each contributor may have to answer a variety of questions by drawing information from diverse sources. To improve the overall productivity of development, we propose integrating into a popular IDE (Eclipse) our new visualization tool (VERSO), which computes, organizes, displays and supports navigation through this information in a coherent, effective and intuitive way, so as to harness the human visual system for exploring varied data. We propose structuring the information along three axes: (1) the context (quality, version control, bugs, etc.) determines the type of information; (2) the level of granularity (line of code, method, class, package) yields the information at the appropriate level of detail; and (3) evolution extracts the information from the desired version of the software. Each view of the software corresponds to a discrete coordinate along these three axes, and we pay particular attention to coherence by navigating only between adjacent views, so as to reduce the cognitive load of searching when answering users' questions. Two experiments validate the value of our integrated approach on representative tasks. They suggest that access to varied information, presented graphically and coherently, should greatly aid contemporary software development.
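To illustrate the three-axis structure and the adjacent-view navigation rule described above, here is a minimal sketch; the axis values and the one-step adjacency test are our reading of the abstract, not VERSO's actual implementation.

```python
from dataclasses import dataclass

CONTEXTS = ["quality", "version_control", "bugs"]
GRANULARITIES = ["line", "method", "class", "package"]
VERSIONS = list(range(10))  # assumed: ten revisions of the software

@dataclass(frozen=True)
class View:
    context: int      # index into CONTEXTS
    granularity: int  # index into GRANULARITIES
    version: int      # index into VERSIONS

def is_adjacent(a: View, b: View) -> bool:
    """Coherent navigation: exactly one axis changes, by exactly one step."""
    deltas = [abs(a.context - b.context),
              abs(a.granularity - b.granularity),
              abs(a.version - b.version)]
    return sorted(deltas) == [0, 0, 1]

current = View(context=0, granularity=2, version=5)   # quality, class, v5
target = View(context=0, granularity=3, version=5)    # zoom out to package
print(is_adjacent(current, target))  # True: one axis moved one step
```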
Abstract:
Nowadays, large applications are developed using numerous application frameworks and middleware. The excessive use of temporary objects is a performance problem common to these applications, known as "object churn". Identifying and understanding the sources of object churn is a difficult and laborious task, despite recent advances in automated analysis techniques. We present an interactive visual approach designed to help developers explore the behaviour of their applications quickly and intuitively in order to find the sources of object churn. We implemented this technique in Vasco, a new and flexible platform. Vasco focuses on three main design axes. First, the data to be visualized are extracted from execution traces and analyzed so as to compute and retain only what is needed to locate the sources of object churn. Large programs can thus be visualized while keeping the representation clear and comprehensible. Second, an intuitive representation minimizes the cognitive effort required by the visualization task. Finally, fluid transitions and interactions allow users to keep track of the actions they have performed. We demonstrate the effectiveness of the approach by identifying sources of object churn in three framework-intensive applications, including a commercial system.
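As a schematic of the kind of trace reduction described above: a minimal sketch that aggregates allocation events by call site and flags sites producing many short-lived objects. The trace format and the churn criterion are invented for illustration; the abstract does not describe Vasco's actual pipeline at this level.

```python
from collections import defaultdict

# Hypothetical allocation trace: (call_site, object_lifetime_in_gc_cycles).
trace = [
    ("Parser.tokenize", 0), ("Parser.tokenize", 0), ("Parser.tokenize", 1),
    ("Cache.put", 40), ("Render.frame", 0), ("Parser.tokenize", 0),
]

# Aggregate per call site: total allocations and how many died young.
stats = defaultdict(lambda: {"allocs": 0, "died_young": 0})
for site, lifetime in trace:
    stats[site]["allocs"] += 1
    if lifetime <= 1:  # assumed criterion for a "temporary" object
        stats[site]["died_young"] += 1

# Retain only likely churn sources: frequently allocating, mostly-temporary sites.
churn_sources = {
    site: s for site, s in stats.items()
    if s["allocs"] >= 3 and s["died_young"] / s["allocs"] > 0.8
}
print(churn_sources)  # {'Parser.tokenize': {'allocs': 4, 'died_young': 4}}
```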
Abstract:
This paper investigates the effect of accountability (the decision maker's expectation of having to justify his/her decisions to somebody else) on loss aversion. Loss aversion is commonly thought to be the strongest component of risk aversion. Accountability is found to reduce the bias of loss aversion. This effect is explained by the higher cognitive effort induced by accountability, which triggers a rational check on the emotional reactions at the base of loss aversion, leading to a reduction of the latter. Connections to dual-processing models are discussed.
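For readers unfamiliar with the construct: loss aversion is standardly formalized with a coefficient lambda > 1 that weights losses more heavily than gains (the Kahneman-Tversky value function). Below is a minimal sketch of how a lower lambda, which is the direction the accountability account predicts, can flip a gamble's subjective value. The functional form and parameters are the textbook ones, not this paper's estimation method.

```python
def value(x: float, lam: float, alpha: float = 0.88) -> float:
    """Prospect-theory value function: concave for gains, convex for
    losses, with losses weighted by the loss-aversion coefficient lam."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def gamble_value(gain: float, loss: float, lam: float) -> float:
    """Subjective value of a 50/50 gamble over a gain and a loss."""
    return 0.5 * value(gain, lam) + 0.5 * value(loss, lam)

# A 50/50 gamble: win 100 or lose 60.
for lam in (2.25, 1.5):   # 2.25 ~ typical estimate; lower under accountability
    v = gamble_value(100.0, -60.0, lam)
    print(f"lambda={lam}: value={v:+.1f} -> {'accept' if v > 0 else 'reject'}")
```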
Abstract:
This study examined a new type of cognitive intervention. For four weeks, participants (ages 65 to 82) were instructed in professional acting techniques, followed by rehearsal and performance of theatrical scenes. Although the training was not targeted in any way to the tasks used in pre- and post-testing, participants produced significantly higher recall and recognition scores after the intervention. It is suggested that the cognitive effort involved in analyzing and adopting theatrical characters' motivations (and then experiencing those characters' mental/emotional states during performance) is responsible for the observed improvement. A secondary strand of this study showed that participants who were given annotated scripts in which the implied goals of the characters were made explicit demonstrated significantly faster access to the stored material, as measured by a computer latency task.
Abstract:
The theories of Moscovici (1980) and Nemeth (1986) concerning the cognitive processes underlying minority influence are examined in an argument generation paradigm. While Moscovici (1980) argues that minority influence increases the generation of arguments for and against the minority position, Nemeth (1986) proposes that minorities induce divergent thinking, which leads to the generation of a wider range of arguments that are more original. In the first study, subjects read a minority text and then generated arguments concerning the minority issue within a specified time. The second study was similar to the first and included a condition where minority influence followed partial sensory deprivation (being placed in a dark, soundproof room for 45 minutes), which was predicted to decrease cognitive effort. Contrary to Moscovici, in neither study was there evidence that a minority led to more arguments being generated compared to a control condition (no influence), although in one study a minority led to more arguments being generated in the minority than in the majority direction. However, as predicted by Nemeth, in both studies a minority resulted in a wider range of arguments being generated than those proposed in the minority's message, and these were rated by independent judges as being more original. Finally, as predicted, partial sensory deprivation led to a narrower range of arguments, which were focused more upon issues raised in the minority text.
Abstract:
We set out to distinguish level 1 (VPT-1) and level 2 (VPT-2) perspective taking with respect to the embodied nature of the underlying processes, and to investigate their dependence on or independence of response modality (motor vs. verbal). While VPT-1 reflects understanding of what lies within someone else's line of sight, VPT-2 involves mentally adopting someone else's spatial point of view. Perspective taking is a high-level, conscious and deliberate mental transformation that sits at the convergence of perception, mental imagery, communication, and, in the case of VPT-2, even theory of mind. The differences between VPT-1 and VPT-2 mark a qualitative boundary between humans and apes, with the latter being capable of VPT-1 but not of VPT-2. Our recent data showed that VPT-2 is best conceptualized as the deliberate simulation or emulation of a movement, thus underpinning its embodied origins. In the work presented here we compared VPT-2 to VPT-1 and found that VPT-1 is not embodied at all, or is embodied very differently. In a second experiment we replicated the qualitatively different patterns for VPT-1 and VPT-2 with verbal responses that employed spatial prepositions. We conclude that VPT-1 is the cognitive process that subserves verbal localizations using "in front" and "behind," while VPT-2 subserves "left" and "right" from a perspective other than the egocentric one. We further conclude that both processes are grounded and situated, but only VPT-2 is embodied, in the form of a deliberate movement simulation whose mental effort increases with distance and incongruent proprioception. These differences in cognitive effort predict differences in the use of the associated prepositions. Our findings therefore shed light on the situated, grounded and embodied basis of spatial localizations and on the psychology of their use.
Abstract:
Our jury system is predicated upon the expectation that jurors engage in systematic processing when considering evidence and making decisions. They are instructed to interpret facts and apply the appropriate law in a fair, dispassionate manner, free of all bias, including that of emotion. However, emotions containing an element of certainty (e.g., anger and happiness, which require little cognitive effort in determining their source) can often lead people to engage in superficial, heuristic-based processing. Compare this to uncertain emotions (e.g., hope and fear, which require people to seek out explanations for their emotional arousal), which instead have the potential to lead them to engage in deeper, more systematic processing. The purpose of the current research is in part to confirm past research (Tiedens & Linton, 2001; Semmler & Brewer, 2002) showing that uncertain emotions (like fear) can shift decision-making towards a more systematic style of processing, whereas more certain emotional states (like anger) lead to a more heuristic style of processing. Studies One, Two, and Three build upon this prior research with the goal of improving methodological rigor through the use of film clips to reliably induce emotions, with awareness of testimonial details serving as the measure of processing style. The ultimate objective of the current research was to explore this effect in Study Four by inducing fear, anger, or a neutral emotion in mock jurors, half of whom then followed along with a trial transcript featuring eight testimonial inconsistencies, while the other participants followed along with an error-free version of the same transcript. Overall rates of detection for these inconsistencies were expected to be higher for the uncertain/fearful participants, due to their more effortful processing, than for the certain/angry participants. These expectations were not fulfilled, with significant main effects only for the transcript version (with or without inconsistencies) on overall inconsistency detection rates. There are a number of plausible explanations for these results, so further investigation is needed.