Abstract:
Social desirability and the fear of negative consequences often deter a considerable share of survey respondents from answering sensitive questions truthfully, so the resulting prevalence estimates are biased. Indirect techniques for asking sensitive questions, such as the Randomized Response Technique (RRT), are intended to mitigate misreporting by fully concealing individual answers. However, it is far from clear whether these indirect techniques actually produce more valid measurements than standard direct questioning. To evaluate the validity of different sensitive question techniques, we carried out an online validation experiment on Amazon Mechanical Turk in which respondents' self-reports of norm-breaking behavior (cheating in dice games) were validated against their observed behavior. This document describes the design of the validation experiment and provides details on the questionnaire, the implementations of the different sensitive question techniques, the fieldwork, and the resulting dataset. The appendix contains a codebook of the data and facsimiles of the questionnaire pages and other survey materials.
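As an illustration of the validation logic described in this abstract (not taken from the study's materials; the function and example values below are hypothetical), false negatives and false positives can be obtained by cross-tabulating observed cheating against self-reported cheating. A minimal Python sketch:

def misreport_rates(records):
    """records: iterable of (cheated, admitted) boolean pairs, one per respondent."""
    cheaters = [(c, a) for c, a in records if c]
    non_cheaters = [(c, a) for c, a in records if not c]
    # False negatives: observed cheaters who deny having cheated (underreporting).
    false_negative_rate = sum(1 for _, a in cheaters if not a) / len(cheaters)
    # False positives: non-cheaters who claim to have cheated (overreporting).
    false_positive_rate = sum(1 for _, a in non_cheaters if a) / len(non_cheaters)
    return false_negative_rate, false_positive_rate

# Hypothetical example: 3 observed cheaters (one denies) and 2 non-cheaters (one falsely admits).
print(misreport_rates([(True, True), (True, True), (True, False),
                       (False, False), (False, True)]))  # -> (0.333..., 0.5)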
Abstract:
Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm-breaking behavior such as shoplifting, non-voting, or tax evasion may therefore be subject to considerable misreporting. To mitigate such misreporting, various indirect techniques for asking sensitive questions, such as the randomized response technique (RRT), have been proposed in the literature. In our study, we evaluate the validity of several variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents' self-reports on cheating in dice games to their actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study was implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT, we do observe a reduction of false negatives (that is, an increase in the proportion of cheaters who admit having cheated). At the same time, however, there is an increase in false positives (that is, an increase in the proportion of non-cheaters who falsely admit having cheated). Overall, our findings suggest that none of the implemented sensitive question techniques substantially outperforms direct questioning. Furthermore, our study demonstrates the importance of distinguishing between false negatives and false positives when evaluating the validity of sensitive question techniques.
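For readers unfamiliar with how prevalence estimates are obtained under these designs, the following Python sketch shows the standard moment estimators for the three RRT variants named above (forced-response, unrelated-question, crosswise-model). It is an illustration based on the general RRT literature, not code from the study, and the design parameters in the example are hypothetical:

def forced_response_estimate(yes_rate, p_truth, p_forced_yes):
    # Respondents answer truthfully with probability p_truth and are forced to
    # say "yes" with probability p_forced_yes; yes_rate is the observed
    # proportion of "yes" answers.
    return (yes_rate - p_forced_yes) / p_truth

def unrelated_question_estimate(yes_rate, p_sensitive, unrelated_prevalence):
    # With probability p_sensitive the sensitive item is asked; otherwise an
    # unrelated item with known prevalence unrelated_prevalence is asked.
    return (yes_rate - (1 - p_sensitive) * unrelated_prevalence) / p_sensitive

def crosswise_estimate(same_rate, p_unrelated):
    # Crosswise model: same_rate is the proportion reporting that their answers
    # to the sensitive item and an unrelated item (known prevalence
    # p_unrelated != 0.5) are the same.
    return (same_rate + p_unrelated - 1) / (2 * p_unrelated - 1)

# Hypothetical example: 65% "same" answers under a crosswise design with
# p_unrelated = 0.25 imply an estimated prevalence of approximately 0.20.
print(crosswise_estimate(0.65, 0.25))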
Abstract:
The Ocean Sampling Day (OSD) is a simultaneous sampling campaign of the world's oceans that took place for the first time on the summer solstice (June 21st) of 2014. These cumulative samples, related in time, space, and environmental parameters, provide insights into fundamental rules describing microbial diversity and function, and contribute to the blue economy through the identification of novel, ocean-derived biotechnologies. We see the OSD data as a reference data set for generations of experiments to follow in the coming decade. The present data set includes a description of each sample collected during Ocean Sampling Day 2014 and provides contextual environmental data measured concurrently with the collection of water samples for genomic analyses.