11 results for codebook

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

10.00%

Publisher:

Abstract:

Cognitive event-related potentials (ERPs) are widely employed in the study of dementive disorders. The morphology of the averaged response is known to be influenced by neurodegenerative processes and is exploited for diagnostic purposes. This work builds on the idea that there is additional information in the dynamics of single-trial responses. We introduce a novel way to detect mild cognitive impairment (MCI) from recordings of auditory ERP responses. Using single-trial responses from a cohort of 25 amnestic MCI patients and a group of age-matched controls, we propose a descriptor that encapsulates single-trial (ST) response dynamics for the benefit of early diagnosis. A customized vector quantization (VQ) scheme is first employed to summarize the overall set of ST responses by means of a small, semantically organized codebook of brain waves. Each ST response is then treated as a trajectory that can be encoded as a sequence of code vectors, and a subject's set of responses is consequently represented as a histogram of activated code vectors. Discriminating MCI patients from healthy controls is based on the deduced response profiles and carried out by means of a standard machine learning procedure. The novel response representation was found to significantly improve MCI detection with respect to the standard alternative representation obtained via ensemble averaging (by 13% in sensitivity and 6% in specificity). Hence, the role of cognitive ERPs as a biomarker for MCI can be enhanced by adopting the fine-grained description provided by our VQ scheme.
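A minimal sketch of the pipeline this abstract describes, assuming a k-means clusterer as the VQ step, sliding-window segments as the unit of encoding, and a linear SVM as the "standard machine learning procedure"; the window length, codebook size, and function names are illustrative and not the authors' actual implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def build_codebook(windows, n_codes=32, seed=0):
    # Learn a small codebook of brain-wave segments from the pooled ST windows.
    km = KMeans(n_clusters=n_codes, random_state=seed, n_init=10)
    km.fit(windows)  # windows: array of shape (n_windows, window_len)
    return km

def subject_profile(trials, km, window_len=50, step=25):
    # Encode each single-trial response as a sequence of code-vector indices,
    # then summarize the subject as a normalized histogram of activated codes.
    hist = np.zeros(km.n_clusters)
    for trial in trials:  # trial: 1-D array of ERP samples
        segments = [trial[i:i + window_len]
                    for i in range(0, len(trial) - window_len + 1, step)]
        hist += np.bincount(km.predict(np.asarray(segments)),
                            minlength=km.n_clusters)
    return hist / hist.sum()

# Hypothetical usage: all_windows pools segments across subjects, and subjects
# is a list of (trials, label) pairs with label 1 = MCI, 0 = control.
# km = build_codebook(all_windows)
# X = np.array([subject_profile(trials, km) for trials, _ in subjects])
# y = np.array([label for _, label in subjects])
# print(cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())

The key design choice mirrored here is that classification operates on the histogram of activated code vectors rather than on the ensemble-averaged waveform.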

Relevance:

10.00%

Publisher:

Abstract:

Social desirability and the fear of negative consequences often deter a considerable share of survey respondents from responding truthfully to sensitive questions, so the resulting prevalence estimates are biased. Indirect techniques for asking sensitive questions, such as the Randomized Response Technique, are intended to mitigate misreporting by providing complete concealment of individual answers. However, it is far from clear whether these indirect techniques actually produce more valid measurements than standard direct questioning. To evaluate the validity of different sensitive question techniques, we carried out an online validation experiment on Amazon Mechanical Turk in which respondents' self-reports of norm-breaking behavior (cheating in dice games) were validated against observed behavior. This document describes the design of the validation experiment and provides details on the questionnaire, the different sensitive question technique implementations, the fieldwork, and the resulting dataset. The appendix contains a codebook of the data and facsimiles of the questionnaire pages and other survey materials.
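For context, a minimal sketch of how a Randomized Response design recovers a prevalence estimate from concealed answers, assuming a forced-response variant with illustrative design probabilities; the variant, probabilities, and function name below are assumptions, not the implementations documented in the experiment's questionnaire materials.

import math

def rrt_prevalence(yes_share, n, p_truth=2/3, p_forced_yes=1/6):
    # Forced-response design: a die roll (unseen by the researcher) tells the
    # respondent to answer truthfully with probability p_truth, to say "yes"
    # regardless with probability p_forced_yes, and to say "no" otherwise.
    # Observed: yes_share = p_truth * pi + p_forced_yes, so solve for pi.
    pi_hat = (yes_share - p_forced_yes) / p_truth
    # Sampling error of the observed proportion, propagated through the design.
    se = math.sqrt(yes_share * (1 - yes_share) / n) / p_truth
    return pi_hat, se

# Example: 40% "yes" answers among 600 respondents gives an estimated
# prevalence of about 0.35 with a standard error of about 0.03.
print(rrt_prevalence(0.40, 600))

Because individual answers are randomized, no single "yes" identifies a cheater; validity is assessed only at the aggregate level, which is why the experiment validates estimates against observed behavior.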