2 results for pre-symptomatic testing
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Frame. Assessing the difficulty of source texts and parts thereof is important in CTIS, whether for research comparability, for didactic purposes, or for setting price differences in the market. In order to measure it empirically, Campbell & Hale (1999) and Campbell (2000) developed the Choice Network Analysis (CNA) framework. The CNA's main hypothesis is that the more translation options (a group of) translators have to render a given source-text stretch, the higher the difficulty of that stretch will be. We will call this the CNA hypothesis. In a nutshell, this research project puts the CNA hypothesis to the test and studies whether it actually measures difficulty. Data collection. Two groups of participants (n=29) of different profiles, from two universities in different countries, had three translation tasks keylogged with Inputlog and filled in pre- and post-translation questionnaires. Participants translated from English (L2) into their L1s (Spanish or Italian) and worked, first in class and then at home, on their own computers, on texts ca. 800–1000 words long. Each text was translated in approximately equal halves in two 1-hour sessions, in three consecutive weeks. Only the parts translated at home were considered in the study. Results. A very different picture emerged from the data than the CNA hypothesis would predict: there was no prevalence of disfluent task segments where there were many translation options, nor was a prevalence of fluent task segments associated with fewer translation options. Indeed, there was no correlation between the number of translation options (many or few) and behavioral fluency. Additionally, there was no correlation between pauses and either behavioral fluency or typing speed. The theoretical flaws discussed and the empirical evidence lead to the conclusion that the CNA framework does not and cannot measure text and translation difficulty.
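To make the reported test concrete, here is a minimal sketch of the kind of analysis described above: correlating the number of translation options per source-text segment (CNA-style counts across translators) with a pause-based fluency measure derived from keylogging. The segment records, option counts, and pause ratios below are hypothetical placeholders, not the thesis data, and the pause-ratio measure is just one plausible operationalization of behavioral fluency.

```python
# Hypothetical sketch: does the number of translation options per segment
# correlate with disfluency (pause ratio) as the CNA hypothesis predicts?
from scipy.stats import spearmanr

# (segment id, # distinct renditions across translators,
#  pause ratio = pause time / total task time for that segment)
segments = [
    ("seg01", 3, 0.42),
    ("seg02", 7, 0.38),
    ("seg03", 2, 0.55),
    ("seg04", 9, 0.47),
    ("seg05", 5, 0.51),
]

options = [n for _, n, _ in segments]
pause_ratio = [p for _, _, p in segments]

rho, p_value = spearmanr(options, pause_ratio)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# Under the CNA hypothesis one would expect a positive correlation
# (more options -> more disfluency); the thesis reports finding none.
```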
Abstract:
In the field of educational and psychological measurement, the shift from paper-based to computerized tests has become a prominent trend in recent years. Computerized tests allow for more complex and personalized test-administration procedures, such as Computerized Adaptive Testing (CAT). CAT, built on Item Response Theory (IRT) models, dynamically assembles tests based on test-taker responses, driven by statistical algorithms. Although CAT structures are complex, they are flexible and convenient; however, concerns about test security must be addressed. Frequent item administration can lead to item exposure and cheating, necessitating preventive and diagnostic measures. In this thesis, a method called "CHeater identification using Interim Person fit Statistic" (CHIPS) is developed, designed to identify and limit cheaters in real time during test administration. CHIPS utilizes response times (RTs) to calculate an Interim Person fit Statistic (IPS), allowing for on-the-fly intervention by switching to a more secure item bank. A slight modification, Modified-CHIPS (M-CHIPS), is also proposed to handle situations with constant response speed. A simulation study assesses CHIPS, highlighting its effectiveness in identifying and controlling cheaters; however, it reveals a limitation when cheaters possess all the correct answers, which M-CHIPS overcomes. Furthermore, the method has been shown not to be influenced by the cheaters' ability distribution or by the correlation between test-takers' ability and speed. Finally, the method proves flexible with respect to the choice of significance level and the transition from fixed-length to variable-length tests. The thesis discusses potential applications, including the suitability of the method for multiple-choice tests, assumptions about the RT distribution, and the level of item pre-knowledge. Limitations are also discussed, pointing to future developments such as different RT distributions, unusual behaviors of honest respondents, and field testing in real-world scenarios. In summary, CHIPS and M-CHIPS offer real-time cheating detection in CAT, enhancing test security and ability estimation without penalizing test respondents.
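To illustrate the general idea of an RT-based interim person-fit check in CAT (not the CHIPS statistic itself, whose exact form is defined in the thesis), here is a minimal sketch assuming a lognormal response-time model in the spirit of van der Linden (2006): each item has a time intensity and a discrimination for RTs, the test-taker has a speed parameter, and aberrantly fast responses (e.g. from item pre-knowledge) produce large negative standardized residuals. All parameter values, the flagging threshold, and the function name below are hypothetical.

```python
# Hypothetical sketch of an interim RT-based person-fit check during a CAT.
import math

def interim_rt_flag(log_rts, betas, alphas, tau, z_crit=-2.33):
    """Aggregate standardized log-RT residuals for the items administered so
    far and flag the respondent if responses are collectively too fast."""
    # Under the lognormal model: ln T_i ~ N(beta_i - tau, 1/alpha_i^2),
    # so alpha_i * (ln t_i - (beta_i - tau)) is ~N(0, 1) for honest behavior.
    residuals = [a * (lrt - (b - tau)) for lrt, b, a in zip(log_rts, betas, alphas)]
    z = sum(residuals) / math.sqrt(len(residuals))  # ~N(0, 1) under the model
    return z, z < z_crit  # one-sided flag: implausibly fast responding

# Hypothetical interim check after 5 items:
log_rts = [math.log(t) for t in (4.1, 3.8, 2.0, 2.2, 1.9)]  # RTs in seconds
betas   = [3.0, 2.9, 3.1, 3.0, 2.8]   # item time intensities
alphas  = [1.5, 1.4, 1.6, 1.5, 1.5]   # item RT discriminations
z, flagged = interim_rt_flag(log_rts, betas, alphas, tau=1.2)
print(f"z = {z:.2f}, flagged = {flagged}")
# A flagged respondent could then be routed to a more secure item bank,
# which is the kind of on-the-fly intervention the CHIPS procedure proposes.
```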