983 results for items
Abstract:
In recent years, many real-time applications have needed to handle data streams. We consider distributed environments in which remote data sources continuously collect data from the real world or from other sources and push it to a central stream processor. In such environments, transmitting rapid, high-volume, time-varying data streams induces significant communication cost and imposes computing overhead on the central processor. In this paper, we develop a novel filter approach, called the DTFilter approach, for evaluating windowed distinct queries in such a distributed system. The DTFilter approach is based on a search algorithm over a data structure of two height-balanced trees; it avoids transmitting duplicate items in the data streams and thereby saves substantial network resources. In addition, we provide a theoretical analysis of the time spent performing the search and of the amount of memory needed. Extensive experiments also show that the DTFilter approach achieves high performance.
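The abstract does not detail the algorithm itself, so the following is only a minimal sketch of the general idea of windowed duplicate filtering at a data source: an item is transmitted only if it is not already present in the current sliding window. The class name `WindowedDistinctFilter`, its `push` method, and the use of a count map plus a deque in place of the paper's two height-balanced trees are assumptions for illustration, not the authors' DTFilter implementation.

```python
from collections import deque

class WindowedDistinctFilter:
    """Forward an item only if it is not already in the current sliding window.
    Hypothetical sketch: a count map plus a deque stand in for the paper's
    two height-balanced trees."""

    def __init__(self, window_size):
        self.window_size = window_size
        self.window = deque()   # items in arrival order
        self.counts = {}        # item -> multiplicity inside the window

    def push(self, item):
        """Return True if `item` should be transmitted (first occurrence in
        the window), False if it is a duplicate that can be filtered out."""
        # Evict the oldest item once the window is full.
        if len(self.window) == self.window_size:
            old = self.window.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]
        is_new = item not in self.counts
        self.window.append(item)
        self.counts[item] = self.counts.get(item, 0) + 1
        return is_new


# Example: only non-duplicate items within a window of 4 are transmitted.
f = WindowedDistinctFilter(window_size=4)
stream = [3, 5, 3, 7, 5, 9]
transmitted = [x for x in stream if f.push(x)]
print(transmitted)  # [3, 5, 7, 9]
```

In the paper's setting this filtering would run at each remote source, so duplicate items never reach the network; the two-tree structure presumably supports the same membership test with bounded search time, which the hash map only approximates here.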
Abstract:
The convergence on the Big Five in personality theory has produced a demand for efficient yet psychometrically sound measures. Therefore, five single-item measures, using bipolar response scales, were constructed to measure the Big Five and evaluated in terms of their convergent and off-diagonal divergent properties, their pattern of criterion correlations and their reliability when compared with four longer Big Five measures. In a combined sample (N = 791) the Single-Item Measures of Personality (SIMP) demonstrated a mean convergence of r = 0.61 with the longer scales. The SIMP also demonstrated acceptable reliability, self–other accuracy, and divergent correlations, and a closely similar pattern of criterion correlations when compared with the longer scales. It is concluded that the SIMP offer a reasonable alternative to longer scales, balancing the demands of brevity versus reliability and validity.
Abstract:
The Biased Competition Model (BCM) suggests that both top-down and bottom-up biases operate on selective attention (e.g., Desimone & Duncan, 1995). It has been suggested that top-down control signals may arise from working memory. In support, Downing (2000) found faster responses to probes presented in the location of stimuli held vs. not held in working memory. Soto, Heinke, Humphreys, and Blanco (2005) showed the involuntary nature of this effect and that shared features between stimuli were sufficient to attract attention. Here we show that stimuli held in working memory influenced the deployment of attentional resources even when (1) it was detrimental to the task, (2) there was equal prior exposure, and (3) there was no bottom-up priming. These results provide further support for involuntary top-down guidance of attention from working memory and for the basic tenets of the BCM, and they further discredit the notion that bottom-up priming is necessary for the effect to occur.
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015.
Abstract:
Context effects in a personality scale were examined by determining whether conscientiousness scale (C) scores differed significantly when the scale was administered alone vs. as part of a Five Factor Model inventory (Big5). The effectiveness of individual difference variables (IDVs) as predictors of the context effect was also examined. The experiment compared subjects who completed the full Big5 once and the C alone once (Big5/C or C/Big5) with subjects who completed either the Big5 inventory twice (Big5/Big5) or the C twice (C/C). No significant differences were found. When the Big5/C and C/Big5 groups were combined and the IDVs were tested, only the field dependence variable (R2 = .06) significantly predicted the context effect. However, the small R2 minimized concerns about context effects in Big5 inventories.
Abstract:
This study investigated the effects of syllable count and word frequency of the words in the reading passages, question stems, and answer options of easy and difficult reading comprehension items. Significant differences were found between the easy and difficult items.
Abstract:
Current interest in measuring quality of life is generating interest in the construction of computerized adaptive tests (CATs) with Likert-type items. Calibration of an item bank for use in CAT requires collecting responses to a large number of candidate items. However, the number is usually too large to administer to each subject in the calibration sample. The concurrent anchor-item design solves this problem by splitting the items into separate subtests, with some common items across subtests; then administering each subtest to a different sample; and finally running estimation algorithms once on the aggregated data array, from which a substantial number of responses are then missing. Although the use of anchor-item designs is widespread, the consequences of several configuration decisions on the accuracy of parameter estimates have never been studied in the polytomous case. The present study addresses this question by simulation, comparing the outcomes of several alternatives on the configuration of the anchor-item design. The factors defining variants of the anchor-item design are (a) subtest size, (b) balance of common and unique items per subtest, (c) characteristics of the common items, and (d) criteria for the distribution of unique items across subtests. The results of this study indicate that maximizing accuracy in item parameter recovery requires subtests of the largest possible number of items and the smallest possible number of common items; the characteristics of the common items and the criterion for distribution of unique items do not affect accuracy.
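As a rough illustration of the concurrent anchor-item design described above, the sketch below splits an item bank into subtests that share a block of common (anchor) items and distributes the remaining unique items across subtests. The function name `anchor_item_design` and the even, shuffled allocation of unique items are assumptions for illustration, not the study's exact simulation procedure.

```python
import random

def anchor_item_design(n_items, n_subtests, n_common, seed=0):
    """Split an item bank into subtests that all share `n_common` anchor items
    and partition the remaining (unique) items evenly across subtests.
    Illustrative sketch only; not the study's exact procedure."""
    rng = random.Random(seed)
    items = list(range(n_items))
    rng.shuffle(items)
    common = items[:n_common]      # administered to every calibration sample
    unique = items[n_common:]      # each administered to one sample only
    subtests = [list(common) for _ in range(n_subtests)]
    for i, item in enumerate(unique):
        subtests[i % n_subtests].append(item)
    return subtests

# Example: 30 items, 3 subtests, 6 anchors
# -> each subtest contains 6 anchors + 8 unique items (14 total).
for k, sub in enumerate(anchor_item_design(30, 3, 6)):
    print(f"subtest {k}: {len(sub)} items")
```

Varying `n_subtests` and `n_common` in a sketch like this mirrors the subtest-size and common/unique-balance factors manipulated in the study.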
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting Diagnostic Accuracy (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies.