2 results for Modeling information
at Open University Netherlands
Abstract:
While most students seem to solve information problems effortlessly, research shows that the cognitive skills for effective information problem solving are often underdeveloped. Students manage to find information and formulate solutions, but the quality of both their process and their product is questionable. It is therefore important to develop instruction that fosters these skills. In this research, a 2-hour online intervention was presented to first-year university students with the goal of improving their information problem solving skills while investigating the effects of different types of built-in task support. A training design containing completion tasks was compared to a design using emphasis manipulation; a third variant of the training combined both approaches. In two experiments, these conditions were compared to a control condition receiving conventional tasks without built-in task support. Results of both experiments show that students' information problem solving skills are underdeveloped, which underlines the necessity of formal training. While the intervention improved students' skills, no differences were found between conditions. The authors hypothesize that the effective presentation of supportive information in the form of a modeling example at the start of the training caused a strong learning effect, which masked the effects of task support. Limitations and directions for future research are presented.
Abstract:
The current study builds upon a previous study, which examined the degree to which the lexical properties of students' essays could predict their vocabulary scores. We expand on this previous research by incorporating new natural language processing (NLP) indices related to both the surface and discourse levels of students' essays. Additionally, we investigate the degree to which these NLP indices can be used to account for variance in students' reading comprehension skills. We calculated linguistic essay features using our framework, ReaderBench, an automated text analysis tool that calculates indices related to the linguistic and rhetorical features of text. University students (n = 108) produced timed (25-minute), argumentative essays, which were then analyzed by ReaderBench. Additionally, they completed the Gates-MacGinitie Vocabulary and Reading Comprehension tests. The results of this study indicated that two indices were able to account for 32.4% of the variance in vocabulary scores and 31.6% of the variance in reading comprehension scores. Follow-up analyses revealed that these models further improved when only considering essays that contained multiple paragraphs (R² = .61 and .49, respectively). Overall, the results of the current study suggest that natural language processing techniques can help to inform models of individual differences among student writers.
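To make the analysis concrete, here is a minimal sketch in Python of the general approach the abstract describes: computing essay-level linguistic indices and regressing test scores on them to measure variance explained (R²). This is not the authors' pipeline; the two indices, the helper function essay_indices, and the data below are hypothetical stand-ins, and ReaderBench itself computes a far richer set of linguistic and rhetorical features.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def essay_indices(text: str) -> list:
    """Two illustrative surface-level indices: mean sentence length
    (in words) and type-token ratio (lexical diversity)."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    tokens = text.lower().split()
    mean_sent_len = len(tokens) / max(len(sentences), 1)
    type_token_ratio = len(set(tokens)) / max(len(tokens), 1)
    return [mean_sent_len, type_token_ratio]

# Hypothetical corpus: (essay text, reading comprehension score) pairs.
essays = [
    ("Reading builds knowledge. Writers who read widely vary their words.", 31.0),
    ("Essays are hard. Essays are hard. Essays are hard.", 14.0),
    ("Argumentation requires evidence, structure, and careful qualification.", 35.0),
    ("I like school. School is fun. Fun is good.", 17.0),
]

# Fit an ordinary least-squares model of scores on the two indices
# and report the proportion of variance explained.
X = np.array([essay_indices(text) for text, _ in essays])
y = np.array([score for _, score in essays])
model = LinearRegression().fit(X, y)
print(f"R^2: {r2_score(y, model.predict(X)):.3f}")

In the study itself, the reported R² values (e.g., .61 for vocabulary on multi-paragraph essays) would come from models of this general form, built over ReaderBench's indices rather than these toy features.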