2 results for Word Sense Disambiguation, WSD, Natural Language Processing

in DigitalCommons@University of Nebraska - Lincoln


Relevance:

40.00%

Publisher:

Abstract:

In this action research study of my sixth-grade mathematics classroom, I investigated word problems. I discovered that my students did not like to attempt word problems because they did not understand what was being asked of them. My students also saw no reason to solve word problems or to develop the ability to solve them. I used word problems that covered topics familiar to the students and the skills necessary at the sixth-grade level, because I wanted to deepen their understanding of math and its importance. By having my students journal to me about the steps they had taken to solve a word problem, I was able to see where confusion occurred. Consequently, I was able to help clarify where my students had made mistakes. Also, by writing down the steps they had taken, students saw more clearly where their errors occurred. Each time my students wrote explanations of the steps they used to solve the word problems, they solved them more easily. As I observed my students, they took more time writing their explanations and no longer viewed the task as so difficult.

Relevance:

40.00%

Publisher:

Abstract:

This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of listening to different types of processed speech were examined on intelligibility and on simultaneous visual-motor performance. The goal was to extend the generalizability of results in speech perception to environments outside of the laboratory. The effect of bottom-up information was evaluated with natural, cell phone, and synthetic speech. The effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while performing the simultaneous tracking task. Word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that natural speech was more intelligible than synthetic speech and that synthetic speech was perceived better than cell phone speech. The visual-motor methodology was found to provide independent, supplementary information and a better understanding of the entire speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. Cell phone speech allowed better simultaneous pursuit-rotor performance only at low intelligibility levels, when participants ignored the listening task. Also, simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, it can be concluded that knowledge of intelligibility alone is insufficient to characterize the processing of different speech sources. Additional measures, such as attentional demands and performance of simultaneous tasks, were also important in characterizing the perception of different kinds of speech in complex listening environments.