2 results for forecast errors
at Duke University
Abstract:
Using the wisdom of crowds (combining many individual forecasts to obtain an aggregate estimate) can be an effective technique for improving forecast accuracy. When individual forecasts are drawn from independent and identical information sources, a simple average provides the optimal crowd forecast. However, correlated forecast errors greatly limit the ability of the wisdom of crowds to recover the truth. In practice, this dependence often emerges because information is shared: forecasters may draw largely on the same data when formulating their responses.
To address this problem, I propose an elicitation procedure in which each respondent is asked to provide both their own best forecast and a guess of the average forecast that will be given by all other respondents. I study optimal responses in a stylized information setting and develop an aggregation method, called pivoting, which separates individual forecasts into shared and private information and then recombines them optimally. I develop a tailored pivoting procedure for each of three information models, and I introduce a simple and robust variant that outperforms the simple average across a variety of settings.
In three experiments, I investigate the method and the accuracy of the crowd forecasts. In the first study, I vary the shared and private information in a controlled environment, while the latter two studies examine forecasts in real-world contexts. Overall, the data suggest that a simple minimal pivoting procedure provides an effective aggregation technique that can significantly outperform the crowd average.
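The intuition behind the pivoting idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the dissertation's actual procedure: it assumes the simple "minimal pivot" takes the mean forecast and pushes it away from the mean guess of others' average, on the reasoning that shared information contaminates both responses while private information shows up mainly in one's own forecast.

```python
def minimal_pivot(forecasts, guesses):
    """Sketch of a minimal pivoting aggregator (assumed form).

    Each respondent i reports a forecast f_i and a guess g_i of the
    average forecast of all other respondents. The crowd estimate
    pivots the mean forecast away from the mean guess:

        estimate = mean(f) + (mean(f) - mean(g)) = 2*mean(f) - mean(g)
    """
    n = len(forecasts)
    mean_f = sum(forecasts) / n
    mean_g = sum(guesses) / n
    return 2 * mean_f - mean_g

# Toy example: the truth is 10, but everyone also saw a shared signal
# of 8.5, so individual forecasts are pulled below the truth. Guesses
# of others' average sit even closer to the shared signal.
forecasts = [9.0, 9.5, 8.5, 9.0]    # own best forecasts, mean 9.0
guesses = [8.5, 8.75, 8.25, 8.5]    # guesses of others' average, mean 8.5
print(minimal_pivot(forecasts, guesses))  # → 9.5
```

Here the pivoted estimate (9.5) lands closer to the truth (10) than the simple average (9.0), because the gap between forecasts and guesses reveals roughly how much of the consensus is shared rather than private information.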
Abstract:
People are always at risk of making errors when they attempt to retrieve information from memory. An important question is how to create the optimal learning conditions so that, over time, the correct information is learned and the number of mistakes declines. Feedback is a powerful tool, both for reinforcing new learning and for correcting memory errors. In five experiments, I sought to understand the best procedures for administering feedback during learning. First, I evaluated the popular recommendation that feedback is most effective when given immediately, and I showed that this recommendation does not always hold when correcting errors made with educational materials in the classroom. Second, I asked whether immediate feedback is more effective in a particular case: correcting false memories, that is, strongly held errors that may be difficult to notice even when the learner is confronted with the feedback message. Third, I examined whether varying levels of learner motivation might help to explain cross-experimental variability in feedback timing effects: Are unmotivated learners less likely to benefit from corrective feedback, especially when it is administered at a delay? Overall, the results revealed that there is no "one-size-fits-all" recommendation for administering feedback; the optimal procedure depends on various characteristics of learners and their errors. As a package, the data are consistent with the spacing hypothesis of feedback timing, although this theoretical account does not successfully explain all of the data in the larger literature.