48 results for Neuropsychological Test-performance
in University of Queensland eSpace - Australia
Abstract:
Concussion severity grades according to the Cantu, Colorado Medical Society, and American Academy of Neurology systems were not clearly related to the presence or duration of impaired neuropsychological test performance in 21 professional rugby league athletes. The use of concussion severity guidelines and neuropsychological testing to assist return-to-play decisions requires further investigation.
Abstract:
Objective: This paper compares four techniques used to assess change in neuropsychological test scores before and after coronary artery bypass graft surgery (CABG), and includes a rationale for the classification of a patient as overall impaired. Methods: A total of 55 patients were tested before and after surgery on the MicroCog neuropsychological test battery. A matched control group underwent the same testing regime to generate test–retest reliabilities and practice effects. Two techniques designed to assess statistical change were used: the Reliable Change Index (RCI), modified for practice, and the Standardised Regression-based (SRB) technique. These were compared against two fixed cutoff techniques (the standard deviation and 20% change methods). Results: The incidence of decline across test scores varied markedly depending on which technique was used to describe change. The SRB method identified more patients as declined on most measures. In comparison, the two fixed cutoff techniques displayed relatively reduced sensitivity in the detection of change. Conclusions: Overall change in an individual can be described provided the investigators choose a rational cutoff based on the likely spread of scores due to chance. A cutoff requiring change on ≥20% of test scores provided an acceptable probability given the number of tests commonly encountered. Investigators must also choose a test battery that minimises shared variance among test scores.
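As a rough illustration of the two statistical-change techniques named above, the sketch below implements a practice-adjusted Reliable Change Index and an SRB-style standardised residual. It is a minimal sketch, not the paper's implementation: the function names, the 90% band in the comments, and the idea of estimating the regression parameters from the matched control group are assumptions.

    import numpy as np

    def rci_practice(x1, x2, r12, sd1, practice):
        # Practice-adjusted Reliable Change Index: the retest-minus-baseline
        # difference, less the mean control-group gain, divided by the
        # standard error of the difference derived from test-retest reliability.
        sem = sd1 * np.sqrt(1 - r12)            # standard error of measurement
        s_diff = np.sqrt(2 * sem ** 2)          # SE of a difference score
        return ((x2 - x1) - practice) / s_diff  # |z| >= 1.645 ~ reliable change at 90%

    def srb_change(x1, x2, intercept, slope, se_estimate):
        # SRB-style index: compare the observed retest score with the score
        # predicted from baseline by a regression fitted to the control group.
        predicted = intercept + slope * x1
        return (x2 - predicted) / se_estimate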
Abstract:
Aims: To compare the performance of schizophrenia, mania and well control groups on tests sensitive to impaired executive ability, and to assess the within-group stability of these measures across the acute and subacute phases of psychosis. Method: Recently admitted patients with schizophrenia (n=36), mania (n=18) and a well control group (n=20) were assessed on two occasions separated by 4 weeks. Tests included the Controlled Oral Word Association Test, the Stroop Test, the Wisconsin Card Sort Test, and the Trail Making Test. Results: The two patient groups were significantly impaired on the Stroop Test at both time points compared to the control group. Significant group differences were also found for the Trail Making Test at Time 1 and for the Wisconsin Card Sort Test at Time 2. After controlling for practice effects, significant improvements over time were found on the Stroop and Trail Making tests in the schizophrenia group and on WCST Categories Achieved in the mania group. Discussion: Compared to controls, the patient groups were impaired on measures related to executive ability. The pattern of improvement in test scores between the acute and subacute phases differed between patients with schizophrenia and patients with mania.
Abstract:
This study aimed to replicate and cross-validate the Rapid Screen of Concussion (RSC) for diagnosing mild TBI (mTBI). One hundred (81 male, 19 female) cases of mTBI and 35 (23 male and 12 female) cases of orthopaedic injuries were tested within 24 hr of injury. Double cross-validation was used to examine whether total RSC scores obtained in the current sample generalised to one previously reported. In the new sample, mTBI patients answered fewer orientation questions, recalled fewer words on the learning trial and after a delay, judged fewer sentences in 2 min, and completed fewer symbols on the Digit Symbol Substitution Test than orthopaedic controls. The formulae and cut-offs developed on the original and new samples produced similar sensitivity and overall correct classification rates. Inclusion of the Digit Symbol Substitution Test performance of the new sample improved the sensitivity (80.2%) and specificity (82.6%) in males. It did not improve the correct classification rate in females, which was 89.5% sensitivity and 91.7% specificity before the inclusion of the Digit Symbol Substitution Test. Taken together, these results indicate that a combined score on this 12-min screen yields a measure of the level of brain impairment up to 24 hr after mTBI.
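The sensitivity, specificity, and correct-classification figures above come from applying a cut-off to a combined screening score. A minimal sketch of that tabulation follows; the function name, the direction of the cut-off (lower scores treated as impaired), and the label coding are assumptions, not details taken from the study.

    import numpy as np

    def screen_performance(scores, is_mtbi, cutoff):
        # Classify each case as impaired when the combined RSC-style score
        # falls below the cut-off, then score the screen against the known
        # group membership (1 = mTBI, 0 = orthopaedic control).
        scores = np.asarray(scores, dtype=float)
        is_mtbi = np.asarray(is_mtbi).astype(bool)
        flagged = scores < cutoff
        sensitivity = np.mean(flagged[is_mtbi])    # impaired cases caught
        specificity = np.mean(~flagged[~is_mtbi])  # controls passed
        overall = np.mean(flagged == is_mtbi)      # correct classification rate
        return sensitivity, specificity, overall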
Abstract:
This study examined the test performance of distortion product otoacoustic emissions (DPOAEs) when used as a screening tool in the school setting. A total of 1003 children (mean age 6.2 years, SD = 0.4) were tested with pure-tone screening, tympanometry, and DPOAE assessment. Optimal DPOAE test performance was determined in comparison with pure-tone screening results using clinical decision analysis. The results showed hit rates of 0.86, 0.89, and 0.90, and false alarm rates of 0.52, 0.19, and 0.22 for criterion signal-to-noise ratio (SNR) values of 4, 5, and 11 dB at 1.1, 1.9, and 3.8 kHz, respectively. DPOAE test performance was compromised at 1.1 kHz. In view of the different test performance characteristics across frequencies, the use of a fixed SNR as a pass criterion for all frequencies in DPOAE assessments is not recommended. When compared to pure-tone screening plus tympanometry results, the DPOAEs showed deterioration in test performance, suggesting that the use of DPOAEs alone might miss children with subtle middle ear dysfunction. However, when the results of a test protocol incorporating both DPOAEs and tympanometry were compared with the gold standard of pure-tone screening plus tympanometry, test performance was enhanced. In view of its high performance, a protocol that includes both DPOAEs and tympanometry holds promise as a useful tool in the hearing screening of schoolchildren, including difficult-to-test children.
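The hit and false-alarm rates reported above follow from treating pure-tone screening as the reference and a per-frequency SNR criterion as the decision rule. The sketch below shows that tabulation under assumed inputs; the variable names and the convention that an ear is referred when its SNR falls below the criterion are illustrative.

    import numpy as np

    def dpoae_rates(snr_db, failed_puretone, criterion_db):
        # Refer an ear when its DPOAE signal-to-noise ratio is below the
        # criterion, then score the screen against pure-tone results
        # (failed_puretone: 1 = ear failed pure-tone screening, 0 = passed).
        snr_db = np.asarray(snr_db, dtype=float)
        failed = np.asarray(failed_puretone).astype(bool)
        referred = snr_db < criterion_db
        hit_rate = np.mean(referred[failed])            # failing ears correctly referred
        false_alarm_rate = np.mean(referred[~failed])   # passing ears referred anyway
        return hit_rate, false_alarm_rate

Sweeping criterion_db per frequency and plotting the resulting rate pairs traces the curves underlying the clinical decision analysis.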
Abstract:
Objectives: (1) To establish test performance measures for Transient Evoked Otoacoustic Emission testing of 6-year-old children in a school setting; (2) To investigate whether Transient Evoked Otoacoustic Emission testing provides a more accurate and effective alternative to a pure-tone screening plus tympanometry protocol. Methods: Pure-tone screening, tympanometry and transient evoked otoacoustic emission data were collected from 940 subjects (1880 ears), with a mean age of 6.2 years. Subjects were tested in non-sound-treated rooms within 22 schools. Receiver operating characteristic (ROC) curves, along with specificity, sensitivity, accuracy and efficiency values, were determined for a variety of transient evoked otoacoustic emission/pure-tone screening/tympanometry comparisons. Results: The Transient Evoked Otoacoustic Emission failure rate for the group was 20.3%. The failure rate for pure-tone screening was 8.9%, whilst 18.6% of subjects failed a protocol consisting of combined pure-tone screening and tympanometry results. In essence, findings from the comparison of overall Transient Evoked Otoacoustic Emission pass/fail with overall pure-tone screening pass/fail suggested that use of a modified Rhode Island Hearing Assessment Project criterion would result in a very high probability that a child with a pass result has normal hearing (true negative). However, the hit rate was only moderate. A signal-to-noise ratio (SNR) criterion set at ≥1 dB appeared to provide the best test performance measures for the range of SNR values investigated. Test performance measures generally declined when tympanometry results were included, with the exception of lower false alarm rates and higher positive predictive values. Excluding low frequency data from the Transient Evoked Otoacoustic Emission SNR versus pure-tone screening analysis resulted in improved performance measures. Conclusions: The present study poses several implications for the clinical implementation of Transient Evoked Otoacoustic Emission screening for entry-level school children. Transient Evoked Otoacoustic Emission pass/fail criteria will require revision. The findings of the current investigation support the possible replacement of pure-tone screening with Transient Evoked Otoacoustic Emission testing for 6-year-old children. However, they do not suggest the replacement of the pure-tone screening plus tympanometry battery.
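Selecting a best-performing SNR criterion, as this study did, amounts to sweeping candidate criteria and reading the resulting ROC points. A rough sketch under assumed inputs is below; ranking criteria by overall proportion correct ("efficiency") is an assumption about the selection rule, not a detail from the paper.

    import numpy as np

    def roc_sweep(snr_db, failed_reference, criteria_db):
        # For each candidate SNR pass criterion, refer ears below the
        # criterion and record (false-alarm rate, hit rate) against the
        # reference test (e.g. pure-tone screening pass/fail).
        snr_db = np.asarray(snr_db, dtype=float)
        failed = np.asarray(failed_reference).astype(bool)
        points = []
        for c in criteria_db:
            referred = snr_db < c
            hit = np.mean(referred[failed])
            fa = np.mean(referred[~failed])
            efficiency = np.mean(referred == failed)  # overall proportion correct
            points.append((c, fa, hit, efficiency))
        return points  # the criterion with the best trade-off is read off here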
Abstract:
Background: Estimates of the performance of carbohydrate-deficient transferrin (CDT) and gamma-glutamyltransferase (GGT) as markers of alcohol consumption have varied widely. Studies have differed in design and subject characteristics. The WHO/ISBRA Collaborative Study allows assessment and comparison of CDT, GGT, and aspartate aminotransferase (AST) as markers of drinking in a large, well-characterized, multicenter sample. Methods: A total of 1863 subjects were recruited from five countries (Australia, Brazil, Canada, Finland, and Japan). Recruitment was stratified by alcohol use, age, and sex. Demographic characteristics, alcohol consumption, and presence of ICD-10 dependence were recorded using an interview schedule based on the AUDADIS. CDT was assayed using CDTect™, and GGT and AST by standard methods. Statistical techniques included receiver operating characteristic (ROC) analysis. Multiple regression was used to measure the impact of factors other than alcohol on test performance. Results: CDT and GGT had comparable performance on ROC analysis, with AST performing slightly less well. CDT was a slightly but significantly better marker of high-risk consumption in men. All were more effective for detection of high-risk than intermediate-risk drinking. CDT and GGT levels were influenced by body mass index, sex, age, and smoking status. Conclusions: CDT was little better than GGT in detecting high- or intermediate-risk alcohol consumption in this large, multicenter, predominantly community-based sample. As the two tests are relatively independent of each other, their combination is likely to provide better performance than either test alone. Test interpretation should take account of sex, age, and body mass index.
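Comparing markers "on ROC analysis" typically reduces to comparing areas under their ROC curves. A minimal sketch of a nonparametric AUC via the Mann-Whitney equivalence follows; the variable names and the binary high-risk coding are assumptions for illustration.

    import numpy as np

    def auc_mann_whitney(marker, high_risk):
        # AUC equals the probability that a randomly chosen high-risk drinker
        # has a higher marker value than a randomly chosen lower-risk subject,
        # counting ties as one half (Mann-Whitney U equivalence).
        marker = np.asarray(marker, dtype=float)
        high_risk = np.asarray(high_risk).astype(bool)
        pos, neg = marker[high_risk], marker[~high_risk]
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (pos.size * neg.size)

    # Illustrative use (variable names invented):
    # auc_mann_whitney(cdt_levels, heavy) vs auc_mann_whitney(ggt_levels, heavy)
    # mirrors the CDT-versus-GGT comparison described above.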
Abstract:
Increased professionalism in rugby has elicited rapid changes in the fitness profile of elite players. Recent research focusing on the physiological and anthropometric characteristics of rugby players, and on the demands of competition, is reviewed. The paucity of research on contemporary elite rugby players is highlighted, along with the need for standardised testing protocols. Recent data reinforce the pronounced differences in the anthropometric and physical characteristics of the forwards and backs. Forwards are typically heavier and taller, and have a greater proportion of body fat, than backs. These characteristics are changing, with forwards developing greater total mass and higher muscularity. The forwards demonstrate superior absolute aerobic and anaerobic power, and muscular strength. Results favour the backs when body mass is taken into account. The scaling of results to body mass can be problematic, and future investigations should present results using power function ratios. Recommended tests for elite players include body mass and skinfolds, vertical jump, speed, and the multi-stage shuttle run. Repeat sprint testing is a possible avenue for more specific evaluation of players. During competition, high-intensity efforts are often followed by periods of incomplete recovery. The total work over the duration of a game is lower in the backs than in the forwards; forwards spend more time in physical contact with the opposition, while the backs spend more time in free running, allowing them to cover greater distances. The intense efforts undertaken by rugby players place considerable stress on anaerobic energy sources, while the aerobic system provides energy during repeated efforts and for recovery. Training should focus on repeated brief high-intensity efforts with short rest intervals to condition players to the demands of the game. Training for the forwards should emphasise the higher work rates of the game, while extended rest periods can be provided to the backs. Players should be prepared not only for the demands of competition, but also for the stress of travel and extreme environmental conditions. The greater professionalism of rugby union has increased scientific research in the sport; however, there is scope for significant refinement of investigations on the physiological demands of the game and sport-specific testing procedures.
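A power function ratio, as recommended above for body-mass scaling, expresses performance per kilogram raised to an exponent rather than per kilogram. The sketch below is illustrative only: the exponent of 0.67 is a commonly cited theoretical value for strength and power, and in practice the exponent is estimated from the sample by log-log regression.

    import numpy as np

    def power_function_ratio(value, body_mass_kg, exponent=0.67):
        # Scale a performance result by body mass raised to an allometric
        # exponent; a simple ratio (value / mass) corresponds to exponent = 1
        # and tends to penalise heavier players such as forwards.
        return value / body_mass_kg ** exponent

    def fitted_exponent(values, masses):
        # Estimate the exponent from the sample via log-log regression:
        # log(value) = log(a) + b * log(mass); the slope b is the exponent.
        b, log_a = np.polyfit(np.log(masses), np.log(values), 1)
        return b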
Abstract:
Helicobacter pylori infection is common among adults with intellectual disability. The acceptabilities and accuracies of different diagnostic tests in this population are unknown. We aimed to determine (i) patient acceptability and (ii) performance characteristics of serology, fecal-antigen, and urea breath tests among adults with intellectual disability. One hundred sixty-eight such adults underwent H. pylori testing with serology and fecal-antigen tests, and a portion underwent treatment. One year later, the participants were retested with fecal-antigen, serology, and urea breath tests. The numbers of specimens obtained and difficulties in collection reported by caregivers were noted. Test performance characteristics were assessed among participants and 65 of their caregivers, using serology as the reference. All participants provided at least one specimen, despite reported collection difficulties for 23% of fecal and 27% of blood specimens. Only 25% of the participants provided breath specimens; failure to perform this test was associated with lower intellectual ability and higher maladaptive behavior. The sensitivity, specificity, and positive and negative predictive values of the fecal test (baseline and 12 months versus caregivers) were 70 and 63 versus 81, 93 and 95 versus 98, 96 and 92 versus 93, and 53 and 74 versus 93%, respectively; those of the urea breath test (12 months versus caregivers) were 86 versus 100, 88 versus 95, 75 versus 89, and 94 versus 100%, respectively. With assistance, fecal or blood specimens for H. pylori assessment can be provided by most patients with intellectual disability regardless of their level of function or behavior. Only those with greater ability can perform the urea breath test. Using serology as the reference test, the limitations of performance characteristics of the fecal-antigen and urea breath tests are similar to those among a control group of caregivers.
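The four figures quoted per test above are the standard quantities computed from a 2x2 table against serology as the reference; a minimal sketch, with the cell names as the only assumptions:

    def test_performance(tp, fp, fn, tn):
        # Standard screening statistics from a 2x2 table in which serology
        # defines infection status: tp/fn partition the seropositives,
        # fp/tn partition the seronegatives.
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        ppv = tp / (tp + fp)  # positive predictive value
        npv = tn / (tn + fn)  # negative predictive value
        return sensitivity, specificity, ppv, npv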
Abstract:
The present study aimed to determine whether including a sensitive test of immediate and delayed recall would improve the diagnostic validity of the Rapid Screen of Concussion (RSC) in mild Traumatic Brain Injury (mTBI) versus orthopaedic clinical samples. Two studies were undertaken. In Study 1, the performance of 156 mTBI and 145 orthopaedic participants was analysed to identify the number of individuals who performed at ceiling on the verbal memory subtest of the RSC, as this test required immediate and delayed recall of only five words. A second aim was to determine the sensitivity and specificity levels of the RSC. Study 2 aimed to examine whether replacement of the verbal memory subtest with the 12-word Hopkins Verbal Learning Test (HVLT) could improve the sensitivity of the RSC in a new sample of 26 mTBI and 30 orthopaedic participants. Both studies showed that orthopaedic participants outperformed mTBI participants on each of the selected measures. Study 1 showed that 14% of mTBI participants performed at ceiling on the immediate recall test and 21.2% on the delayed recall test. Performance on the original battery yielded a sensitivity of 82%, a specificity of 80% and an overall correct classification of 81.5% of participants. In Study 2, inclusion of the HVLT improved sensitivity to 88.5%, decreased specificity to 70% and resulted in an overall classification rate of 80%. It was concluded that although inclusion of the five-word subtest in the RSC can successfully distinguish concussed from non-concussed individuals, use of the HVLT in this protocol yields a more sensitive measure of subtle cognitive deficits following mTBI.
Abstract:
Theory of mind (ToM) was examined in late-signing deaf children in two studies by using standard tests and measures of spontaneous talk about inner states of perception, affect and cognition during storytelling. In Study 1, there were 21 deaf children aged 6 to 11 years and 13 typical-hearing children matched with the deaf by chronological age. In Study 2, there were 17 deaf children aged 6 to 12 years and 17 typical-hearing preschoolers aged 4 to 5 years who were matched with the deaf by ToM test performance. In addition to replicating the consistently reported finding of poor performance on standard false belief tests by late-signing deaf children, significant correlations emerged in both studies between deaf children's ToM test scores and their spontaneous narrative talk about imaginative cognition (e.g. 'pretend'). In Study 2, with a new set of purpose-built pictures that evoked richer and more complex mentalistic narration than the published picture book of Study 1, results of multiple regression analyses showed that children's narrative talk about imaginative cognition was uniquely important, over and above hearing status and talk of other kinds of mental states, in predicting ToM scores. The same was true of children's elaborated narrative talk using utterances that either spelt out thoughts, explained inner states or introduced contrastives. In addition, results of a Guttman scalogram analysis in Study 2 suggested a consistent sequence in narrative and standard test performance by deaf and hearing children that went from (1) narrative mention of visible (affective or perceptual) mental states only, along with false belief (FB) failure, to (2) narrative mention of cognitive states along with (1), to (3) elaborated narrative talk about inner states along with (2), and finally to (4) simple and elaborated narrative talk about affective/perceptual and cognitive states along with FB test success. Possible explanations for this performance ordering, as well as for the observed correlations in both studies between ToM test scores and narrative variables, were considered.
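A Guttman scalogram analysis like the one described asks whether children's pass/fail patterns across the ordered milestones fit a cumulative sequence. A rough sketch of the usual check, the coefficient of reproducibility, is below; the 0.90 convention and the simple mismatch count (rather than Goodenough-Edwards error counting) are assumptions.

    import numpy as np

    def reproducibility(responses):
        # responses: binary person x item matrix with items ordered from the
        # earliest step of the sequence to the latest. Each row is compared
        # with the perfect cumulative pattern implied by its total score
        # (all passes before all fails); mismatches count as Guttman errors.
        responses = np.asarray(responses)
        n_people, n_items = responses.shape
        errors = 0
        for row in responses:
            k = int(row.sum())
            ideal = np.array([1] * k + [0] * (n_items - k))
            errors += int(np.sum(row != ideal))
        return 1 - errors / (n_people * n_items)  # >= 0.90 conventionally scalable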
Abstract:
Fuzzy signal detection analysis can be a useful complement to traditional signal detection theory analysis, particularly in applied settings. For example, traffic situations are better conceived as lying on a continuum from no potential for hazard to high potential, rather than as either having potential or not. This study examined the relative contribution of sensitivity and response bias to explaining differences in the hazard perception performance of novice and experienced drivers, and the effect of a training manipulation. Novice drivers and experienced drivers were compared (N = 64). Half the novices received training, while the experienced drivers and half the novices remained untrained. Participants completed a hazard perception test and rated potential for hazard in occluded scenes. The response latency of participants on the hazard perception test replicated previous findings of experienced/novice differences and trained/untrained differences. Fuzzy signal detection analysis of both the hazard perception task and the occluded rating task suggested that response bias may be more central to hazard perception test performance than sensitivity, with trained and experienced drivers responding faster and with a more liberal bias than untrained novices. Implications for driver training and the hazard perception test are discussed.
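In fuzzy signal detection analysis, an event's signal status and the observer's response are both degrees between 0 and 1, so hits and false alarms become graded memberships rather than all-or-none counts. The sketch below follows the commonly used mapping from Parasuraman, Masalonis and Hancock (2000); treating rated hazard potential as the signal membership is an assumption for illustration, and the z-transform step mirrors conventional SDT.

    import numpy as np
    from scipy.stats import norm

    def fuzzy_sdt(signal, response):
        # signal, response: memberships in [0, 1], e.g. the rated hazard
        # potential of a scene and the graded strength of the driver's response.
        s = np.asarray(signal, dtype=float)
        r = np.asarray(response, dtype=float)
        hits = np.minimum(s, r)                # degree of hit per event
        false_alarms = np.maximum(r - s, 0.0)  # degree of false alarm per event
        hit_rate = hits.sum() / s.sum()
        fa_rate = false_alarms.sum() / (1.0 - s).sum()
        d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)     # sensitivity
        c = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # bias; c < 0 is liberal
        return hit_rate, fa_rate, d_prime, c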
Abstract:
In this study, we examined genetic and environmental influences on covariation among two reading tests used in neuropsychological assessment (Cambridge Contextual Reading Test [CCRT] [Beardsall, L., and Huppert, F. A. (1994). J. Clin. Exp. Neuropsychol. 16: 232-242]; Schonell Graded Word Reading Test [SGWRT] [Schonell, F. J., and Schonell, P. E. (1960). Diagnostic and attainment testing. Edinburgh: Oliver and Boyd]) and among a selection of IQ subtests from the Multidimensional Aptitude Battery (MAB) [Jackson, D. N. (1984). Multidimensional Aptitude Battery. Ontario: Research Psychologists Press] and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) [Wechsler, D. (1981). Manual for the Wechsler Adult Intelligence Scale-Revised (WAIS-R). San Antonio: The Psychological Corporation]. Participants were 225 monozygotic and 275 dizygotic twin pairs aged from 15 years to 18 years (mean, 16 years). For Verbal IQ subtests, phenotypic correlations with the reading tests ranged from 0.44 to 0.65. For Performance IQ subtests, phenotypic correlations with the reading tests ranged from 0.23 to 0.34. Results of Structural Equation Modeling (SEM) supported a model with one genetic General factor and three genetic group factors (Verbal, Performance, Reading). Reading performance was influenced by the genetic General factor (accounting for 13% and 20% of the variance for the CCRT and SGWRT, respectively), the genetic Verbal factor (explaining 17% and 19% of variance for the CCRT and SGWRT), and the genetic Reading factor (explaining 21% of the variance for both the CCRT and SGWRT). A common environment factor accounted for 25% and 14% of the CCRT and SGWRT variance, respectively. Genetic influences accounted for more than half of the phenotypic covariance between the reading tests and each of the IQ subtests. The heritabilities of the CCRT and SGWRT were 0.54 and 0.65, respectively. Observable covariance between reading assessments used by neuropsychologists to estimate IQ and IQ subtests appears to be largely due to genetic effects.
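The study's variance components come from structural equation modeling of the twin covariance matrices, but the logic can be illustrated with the simpler Falconer approximations from MZ and DZ twin correlations. The sketch below is not the study's method, and the example correlations are invented, chosen only so the outputs echo the reported CCRT figures (heritability 0.54, common environment 0.25).

    def falconer(r_mz, r_dz):
        # Falconer approximations from twin-pair correlations:
        # h2 = additive genetic variance, c2 = shared (common) environment,
        # e2 = nonshared environment plus measurement error.
        h2 = 2 * (r_mz - r_dz)
        c2 = r_mz - h2
        e2 = 1 - r_mz
        return h2, c2, e2

    # Invented correlations for illustration: falconer(0.79, 0.52)
    # returns approximately (0.54, 0.25, 0.21), echoing the CCRT estimates.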