2 results for CRITERION-RELATED VALIDITY

in Cambridge University Engineering Department Publications Database


Relevance:

100.00%

Publisher:

Abstract:

The Statistics Anxiety Rating Scale (STARS) was adapted into German to examine its psychometric properties (n = 400). Two validation studies (n = 66, n = 96) were conducted to examine its criterion-related validity. The psychometric properties of the questionnaire were very similar to those previously reported for the original English version in various countries and for other language versions. Confirmatory factor analysis indicated two second-order factors: one was more closely related to anxiety, and the other was more closely related to negative attitudes toward statistics. Predictive validity of the STARS was shown both in an experimental exam-like situation in the laboratory and during a real examination situation. Taken together, the findings indicate that statistics anxiety as assessed by the STARS is a useful construct that is more than just an expression of a more general disposition to anxiety.
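Criterion-related (and in particular predictive) validity of the kind examined here is conventionally quantified as the correlation between scale scores and a criterion measure. A minimal sketch, using hypothetical data (the scores below are invented for illustration and are not from the study):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length samples
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical STARS-like scale scores and exam-situation anxiety ratings
stars_scores = [45, 60, 52, 70, 38, 65, 58, 48]
exam_anxiety = [3.1, 4.5, 3.8, 5.0, 2.7, 4.8, 4.0, 3.3]

# A large positive r would be evidence of predictive validity
r = pearson_r(stars_scores, exam_anxiety)
```

In practice one would also report a significance test or confidence interval for r; this sketch only shows the core statistic.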

Relevance:

30.00%

Publisher:

Abstract:

Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience, 17, 347-356, 2014). Zhang and Luck (Nature, 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics, 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (Attention, Perception, & Psychophysics, 74(5), 891-910, 2011), we found that the evidence in the summary statistics was at most 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.
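The abstract's core procedure, generating synthetic trials from one model and comparing models on the raw data via a log Bayes factor, can be sketched as follows. The two noise laws below (a "slot-like" standard deviation that plateaus at a capacity of 3 items, and a "slotless" power-law growth) and all parameter values are invented stand-ins for the actual models in the paper, and parameters are held fixed rather than marginalized over, so this is only an illustration of the comparison logic:

```python
import math
import random

def gauss_logpdf(x, sd):
    # Log density of a zero-mean Gaussian with standard deviation sd
    return -0.5 * math.log(2 * math.pi * sd * sd) - x * x / (2 * sd * sd)

# Hypothetical noise laws: memory noise as a function of set size n.
def sd_slot(n):
    # "Slot-like": noise grows, then plateaus once capacity (3 items) is exceeded
    return 5.0 * min(n, 3) ** 0.5

def sd_slotless(n):
    # "Slotless": noise keeps growing continuously with set size
    return 5.0 * n ** 0.6

def log_bayes_factor(trials):
    # trials: list of (set_size, recall_error) pairs.
    # With fixed parameters the log Bayes factor reduces to a
    # log-likelihood difference; positive values favor the slot-like model.
    ll_slot = sum(gauss_logpdf(err, sd_slot(n)) for n, err in trials)
    ll_slotless = sum(gauss_logpdf(err, sd_slotless(n)) for n, err in trials)
    return ll_slot - ll_slotless

random.seed(0)
# Synthetic raw data generated from the slot-like model, 250 trials
# per set size -- far fewer than a million
trials = [(n, random.gauss(0.0, sd_slot(n)))
          for n in (1, 2, 4, 6) for _ in range(250)]
lbf = log_bayes_factor(trials)
```

With the raw errors retained, even this modest number of trials tends to recover the generating model; collapsing the data to a few plateau summary statistics first would discard most of that evidence, which is the paper's point.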