14 results for Cognitive performance

in Digital Commons at Florida International University


Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study was to examine the effects of acute active dehydration, induced by exercise in a hot, humid environment, on cognitive performance. In contrast to previous studies, which reported decreased cognitive performance in manual laborers and military personnel working in extreme environmental conditions, our findings were inconclusive.

Relevance:

100.00%

Publisher:

Abstract:

Objective: To evaluate the impact of alcohol use, which is widespread in human immunodeficiency virus (HIV)+ individuals, on highly active antiretroviral therapy (HAART)-associated immune and cognitive improvements, and the relationship between those two responses. Methods: In a case-control longitudinal study, thymic volume, cognition, and immune responses were evaluated at baseline and after 6 months of therapy in HIV+ and HIV- controls. Cognitive performance was evaluated using the HIV Dementia Score (HDS) and the California Verbal Learning Test (CVLT). Results: Prior to HAART, thymic volume varied considerably, from 2.7 to 29.3 cm³ (11 ± 7.2 cm³). Thymic volume at baseline showed a significant inverse correlation with the number of years of drinking (r² = 0.207; p < 0.01), and correlated with HDS and CVLT scores in both HIV-infected (r² = 0.37, p = 0.03) and noninfected (r² = 0.8, p = 0.01) participants. HIV-infected individuals with a small thymic volume scored in the demented range, as compared with those with a larger thymus (7 ± 2.7 vs. 12 ± 2.3, p = 0.005). After HAART, light/moderate drinkers exhibited thymus size twice that of heavy drinkers (14.8 ± 10.4 vs. 6.9 ± 3.3 cm³). Conclusions: HAART-associated increases in thymus volume appear to be negatively affected by alcohol consumption and significantly related to patients' cognitive status. This result could have important clinical implications.

Relevance:

60.00%

Publisher:

Abstract:

Examining factors that affect vitamin D status in the fast-growing elderly population of Miami-Dade County, Florida, is needed. Vitamin D deficiency in older adults has been linked to correlates of disability, including falls and fractures, and to cardiovascular disease. The purpose of this study was to determine the proportion of vitamin D insufficient individuals among older adults (n=97) living in Miami-Dade and the factors associated with vitamin D insufficiency. We evaluated the association between vitamin D status and 1) dual-task physical performance, to understand the link between vitamin D and cognition in the context of mobility; and 2) cardiometabolic risk, measured by galvanic skin response, pulse oximetry, and blood pressure, combined into a composite score based on autonomic nervous system and endothelial function. Participants completed baseline assessments that included serum levels of vitamin D, anthropometrics, body composition, dual-task physical performance, and cardiometabolic risk. Surveys evaluating vitamin D intake, sun exposure, physical activity, and depressive symptoms were also completed. Spearman's correlations, independent t-tests, paired t-tests, repeated-measures ANOVAs, and multiple logistic and linear regressions were used to examine the relationship of vitamin D insufficiency (25(OH)D <30 ng/ml) and sufficiency (25(OH)D ≥30 ng/ml) with determinants of vitamin D status, dual-task physical performance variables, and cardiometabolic risk scores. Although the proportion of vitamin D insufficient individuals was lower than the prevalence in the general United States elderly population, insufficiency was still common in healthy community-dwelling older adults living in Miami-Dade County, especially among Hispanics. Factors that affect skin synthesis (ethnicity and sun exposure) and bioavailability/metabolism (obesity) were significant predictors of vitamin D status.
Vitamin D insufficiency was not significantly correlated with worse dual-task physical performance; however, cognitive performance was worse in the vitamin D insufficient group. Our results suggest a relationship between vitamin D insufficiency and executive dysfunction, and support an association with cardiometabolic risk measured using an innovative electro-sensor complex, possibly through modulation of autonomic nervous system activity and vascular function, thus affecting cardiac performance.

Relevance:

40.00%

Publisher:

Abstract:

The Unified Modeling Language (UML) has quickly become the industry standard for object-oriented software development and is widely used in organizations and institutions around the world. However, UML is often found to be too complex for novice systems analysts. Although prior research has identified difficulties novice analysts encounter in learning UML, no viable solution has been proposed to address them. Sequence-diagram modeling, in particular, has largely been overlooked. The sequence diagram models the behavioral aspects of an object-oriented software system in terms of interactions among its building blocks, i.e., objects and classes, and is one of the most commonly used UML diagrams in practice. However, there has been little research on sequence-diagram modeling, and the current literature scarcely provides effective guidelines for developing a sequence diagram. Such guidelines would be greatly beneficial to novice analysts who, unlike experienced systems analysts, lack the prior experience needed to easily learn how to develop a sequence diagram. There is thus a need for an effective sequence-diagram modeling technique for novices. This dissertation reports a research study that identified novice difficulties in modeling a sequence diagram and proposed a technique called CHOP (CHunking, Ordering, Patterning), designed to reduce cognitive load by addressing the cognitive complexity of sequence-diagram modeling. The CHOP technique was evaluated in a controlled experiment against a technique recommended in a well-known textbook, which was found to be representative of approaches provided in many textbooks as well as the practitioner literature. The results indicated that novice analysts performed better using the CHOP technique, an outcome that seems to have been enabled by the pattern-based heuristics the technique provides.
Novice analysts also rated the CHOP technique as more useful, although not significantly easier to use, than the control technique. The study established that CHOP is an effective sequence-diagram modeling technique for novice analysts.

Relevance:

40.00%

Publisher:

Abstract:

For years, researchers and human resources specialists have been searching for predictors of performance as well as for relevant performance dimensions (Barrick & Mount, 1991; Borman & Motowidlo, 1993; Campbell, 1990; Viswesvaran et al., 1996). In 1993, Borman and Motowidlo provided a framework by which traditional predictors such as cognitive ability and the Big Five personality factors predicted two different facets of performance: 1) task performance and 2) contextual performance. A meta-analysis was conducted to assess the validity of this model as well as that of other modified models. The relationships between predictors such as cognitive ability and personality variables and the two outcome variables were assessed. It was determined that even though the two facets of performance may be conceptually different, empirically they overlapped substantially (ρ = .75). Finally, results show some evidence for both cognitive ability and conscientiousness as predictors of both task and contextual performance. The possible mediation of predictor-criterion relationships was also assessed: the relationship between cognitive ability and contextual performance vanished when task performance was controlled.
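The pooling step behind a meta-analysis like this one can be sketched in miniature: a sample-size-weighted mean of study correlations is the core estimate in a Hunter-Schmidt-style psychometric meta-analysis. The study values below are hypothetical illustrations, not numbers from this dissertation.

```python
# Minimal sketch of the first step of a psychometric meta-analysis:
# pooling observed correlations across studies, weighting by sample size.
# The (n, r) pairs are made up for illustration.

def weighted_mean_r(studies):
    """studies: list of (n, r) pairs -> sample-size-weighted mean correlation."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

studies = [(120, 0.68), (85, 0.74), (200, 0.79)]  # (sample size, observed r)
print(round(weighted_mean_r(studies), 3))  # larger studies pull the mean toward their r
```

A full Hunter-Schmidt analysis would additionally correct the pooled value for artifacts such as measurement unreliability and range restriction; this sketch shows only the weighting.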

Relevance:

30.00%

Publisher:

Abstract:

It was hypothesized that commuting elevates blood pressure, causes negative affect, reduces frustration tolerance, and impairs performance on simple and complex cognitive tasks. This hypothesis was tested by varying choice and type of commute in an experiment in which 168 volunteers were randomly assigned to one of six experimental conditions. The behavior of subjects who drove 30 miles, or rode a bus for the same distance, was compared with the reactions of students who did not commute. Multivariate analyses of variance indicated that subjects who commuted showed lower frustration tolerance and deficits in complex cognitive task performance. Commuting also raised pulse rate and systolic blood pressure. Multivariate analyses of covariance (MANCOVA) were performed in an effort to identify physiological and emotional reactions that might mediate these relations; no mediational relationships were uncovered.

Relevance:

30.00%

Publisher:

Abstract:

The current study applied classic cognitive capacity models to examine the effect of cognitive load on deception, and whether manipulating cognitive load would magnify differences between liars and truth-tellers. In the first study, 87 participants engaged in videotaped interviews while being either deceptive or truthful about a target event. Some participants performed a concurrent secondary task while being interviewed, and performance on that task was measured. As expected, truth-tellers performed better on secondary-task items than liars, as evidenced by higher accuracy rates. These results confirm the long-held assumption that being deceptive is more cognitively demanding than being truthful. In the second part of the study, the videotaped interviews of both liars and truth-tellers were shown to 69 observers, who were asked to make a veracity judgment for each participant. Observers made more accurate veracity judgments when viewing participants who engaged in a concurrent secondary task than when viewing those who did not, and indicated that participants who performed the secondary task appeared to think harder. This study provides evidence that engaging in deception is more cognitively demanding than telling the truth. As hypothesized, having participants engage in a concurrent secondary task magnified the differences between liars and truth-tellers, which in turn led to more accurate veracity judgments by a second group of observers. The implications for deception detection are discussed.

Relevance:

30.00%

Publisher:

Abstract:

During the past decade, there has been a dramatic increase in postsecondary institutions' provision of academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies for reapportioning course-delivery seat time have been a major facet of these institutional initiatives, most notably within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market share, optimize the usage of finite facility capacity, and contain costs, especially during economically turbulent times. Yet while enrollments have surged to the point where nearly one in three 18-to-24-year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment is the cognitive phenomenon commonly referred to as the spacing effect: the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multilevel, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college; subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than their peers in short seat-time classes (p < .05).
Additionally, a model comprising nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected, nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate to educational leaders, researchers, and institutional-research/business-intelligence professionals the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
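The discrete-time survival setup used in studies like this can be sketched as follows: each student is expanded into one row per term at risk (the person-period format), and the discrete-time hazard for a term is the fraction of at-risk students who withdraw in it. In practice, a logistic regression of the event indicator on period dummies and covariates is then fit over these rows. The records below are illustrative, not the cohort data.

```python
# Hedged sketch of the person-period format behind discrete-time survival
# analysis. Each student contributes one row per term until withdrawal or
# censoring; the hazard for term t is the event rate among students at risk.

def person_period(records):
    """records: list of (last_term_observed, withdrew) per student.
    Returns (term, event) rows — one row per student per at-risk term."""
    rows = []
    for last, withdrew in records:
        for t in range(1, last + 1):
            event = 1 if (withdrew and t == last) else 0
            rows.append((t, event))
    return rows

def hazard(rows, t):
    """Fraction of students at risk in term t who experience the event."""
    at_risk = [event for term, event in rows if term == t]
    return sum(at_risk) / len(at_risk)

# (last term, withdrew?): two withdrawals, two censored students
records = [(1, True), (2, True), (2, False), (3, False)]
rows = person_period(records)
print(hazard(rows, 1))  # all 4 at risk in term 1, 1 withdrew -> 0.25
```

Fitting a logistic regression on these rows (e.g., event ~ term dummies + covariates) recovers the hazard model; that step is omitted here to keep the sketch dependency-free.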

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to determine the degree to which the Big-Five personality taxonomy, as represented by the Minnesota Multiphasic Personality Inventory (MMPI), California Psychological Inventory (CPI), and Inwald Personality Inventory (IPI) scales, predicted a variety of police officer job performance criteria. Data were collected archivally for 270 sworn police officers from a large Southeastern municipality. Predictive data consisted of scores on the MMPI, CPI, and IPI scales as grouped in terms of the Big-Five factors. The overall score on the Wonderlic was included in order to assess criterion variance accounted for by cognitive ability. Additionally, a psychologist's overall rating of predicted job fit was used to assess the variance accounted for by a psychological interview. Criterion data consisted of supervisory ratings of overall job performance, State Examination scores, police academy grades, and termination. Based on the literature, it was hypothesized that officers higher on Extroversion, Conscientiousness, Agreeableness, and Openness to Experience and lower on Neuroticism, otherwise known as the Big-Five factors, would outperform their peers across a variety of job performance criteria. Additionally, it was hypothesized that police officers higher in cognitive ability and masculinity, and lower in mania, would also outperform their counterparts. Results indicated that several of the Big-Five factors, namely Neuroticism, Conscientiousness, Agreeableness, and Openness to Experience, were predictive of several of the job performance criteria. Such findings imply that the Big-Five is a useful predictor of police officer job performance. Study limitations and implications for future research are discussed.

Relevance:

30.00%

Publisher:

Abstract:

Technological advancements and the ever-evolving demands of a global marketplace may have changed the way in which training is designed, implemented, and managed, but the ultimate goal of organizational training programs remains the same: to facilitate the learning of knowledge, skills, or other outcomes that will yield improvement in employee performance on the job and within the organization (Colquitt, LePine, & Noe, 2000; Tannenbaum & Yukl, 1992). Studies of organizational training have suggested medium to large effect sizes for the impact of training on employee learning (e.g., Arthur, Bennett, Edens, & Bell, 2003; Burke & Day, 1986). However, learning may be differentially affected by such factors as the (1) level and type of preparation provided prior to training, (2) targeted learning outcome, (3) training methods employed, and (4) content and goals of training (e.g., Baldwin & Ford, 1988). A variety of pre-training interventions have been identified as having the potential to enhance learning from training and practice (Cannon-Bowers, Rhodenizer, Salas, & Bowers, 1998), and numerous individual studies have examined the impact of one or more of these pre-training interventions on learning. I conducted a meta-analytic examination of the effect of these pre-training interventions on cognitive, skill, and affective learning. Results compiled from 359 independent studies (total N = 37,038) reveal consistent positive effects for the role of pre-training interventions in enhancing learning. In most cases, the provision of a pre-training intervention explained approximately 5–10% of the variance in learning, and in some cases up to 40–50%. Overall, attentional advice and meta-cognitive strategies (as compared with advance organizers, goal orientation, and preparatory information) seem to result in the most consistent learning gains.
Discussion focuses on the most beneficial match between an intervention and the learning outcome of interest, the most effective format of these interventions, and the most appropriate circumstances under which these interventions should be utilized. Also highlighted are the implications of these results for practice, as well as propositions for important avenues for future research.
