951 results for Learning Ability
Abstract:
This study investigates the long-term effects of training in small-group and interpersonal behaviours on children's behaviours and interactions as they worked in small groups two years after they were initially trained. Forty-eight third-grade children who had been trained two years previously in cooperative group behaviours were assigned to the Trained condition, and 44 third-grade children who had not previously been trained were assigned to the Untrained condition. Both conditions were reconstituted from the pool of students who had previously participated in either trained or untrained group activities. The results showed a long-term training effect, with the children in the Trained groups demonstrating more cooperative behaviour and providing more explanations in response to requests for help than their untrained peers.
Abstract:
The authors investigated how training in small-group and interpersonal behaviors affected children's behavior and interactions as they worked in small groups 2 years later. They assigned 52 fifth graders, who had been trained 2 years previously in cooperative group behaviors, to the trained condition and 36 fifth graders, who had not previously been trained, to the untrained condition. Both groups were reconstituted from the pool of students who had participated previously in group activities. The results showed a residual training effect, with the children in the trained groups being more cooperative and helpful than their untrained peers.
Abstract:
The long short-term memory (LSTM) network is not the only neural network that learns a context-sensitive language. Second-order sequential cascaded networks (SCNs) can also induce, from a finite fragment of a context-sensitive language, a means of processing strings outside the training set. The dynamical behavior of the SCN is qualitatively distinct from that observed in LSTM networks. Differences in performance and dynamics are discussed.
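A minimal sketch of the kind of task this abstract refers to (not the cited study's actual setup, and with illustrative hyperparameters): a small PyTorch LSTM is trained to recognise strings of the context-sensitive language a^n b^n c^n and is then tested on string lengths outside the training range. The accept/reject framing, data generation, and network sizes below are assumptions for illustration only.

```python
# Illustrative sketch only: a tiny LSTM classifier for a^n b^n c^n,
# trained on short strings and evaluated on longer, unseen lengths.
# (The studies referenced in the abstract used different formulations.)
import random
import torch
import torch.nn as nn

VOCAB = {"a": 0, "b": 1, "c": 2}

def make_example(n, positive=True):
    lengths = {"a": n, "b": n, "c": n}
    if not positive:
        # Corrupt one block length so the string falls outside a^n b^n c^n.
        block = random.choice("abc")
        lengths[block] += random.choice([-1, 1]) if n > 1 else 1
    return "".join(ch * lengths[ch] for ch in "abc")

def encode(s):
    return torch.tensor([VOCAB[ch] for ch in s])

class LSTMClassifier(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), 8)
        self.lstm = nn.LSTM(8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h[:, -1])  # membership score from the final state

model = LSTMClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

# Train on short strings (n = 1..5).
for step in range(2000):
    n = random.randint(1, 5)
    pos = random.random() < 0.5
    x = encode(make_example(n, pos)).unsqueeze(0)
    y = torch.tensor([[1.0 if pos else 0.0]])
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Evaluate on longer strings (n = 6..10) outside the training range.
correct = 0
with torch.no_grad():
    for n in range(6, 11):
        for pos in (True, False):
            pred = model(encode(make_example(n, pos)).unsqueeze(0)).item() > 0
            correct += int(pred == pos)
print(f"accuracy on unseen lengths: {correct}/10")
```

The sketch reads the whole string and scores membership from the final hidden state; the original LSTM and SCN work used next-symbol prediction, so this framing is a simplification of what the abstract describes.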
Abstract:
Input-driven models provide an explicit and readily testable account of language learning. Although we share Ellis's view that the statistical structure of the linguistic environment is a crucial and, until recently, relatively neglected variable in language learning, we also recognize that the approach makes three assumptions about cognition and language learning that are not universally shared. The three assumptions concern (a) the language learner as an intuitive statistician, (b) the constraints on what constitute relevant surface cues, and (c) the redescription problem faced by any system that seeks to derive abstract grammatical relations from the frequency of co-occurring surface forms and functions. These are significant assumptions that must be established if input-driven models are to gain wider acceptance. We comment on these issues and briefly describe a distributed, instance-based approach that retains the key features of the input-driven account advocated by Ellis but that also addresses shortcomings of the current approaches.
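As a toy illustration of assumption (a), the "intuitive statistician", the sketch below (our own example, not drawn from the commentary) tallies word-bigram co-occurrence frequencies in a tiny corpus and uses them to estimate likely continuations; the redescription problem in (c) is precisely that such surface counts do not by themselves yield abstract grammatical relations. The corpus and the relative-frequency estimate are illustrative assumptions.

```python
# Toy sketch only: counting co-occurring surface forms (word bigrams)
# as a stand-in for the "intuitive statistician" account of learning.
from collections import Counter, defaultdict

corpus = [
    "the dog chases the cat",
    "the cat chases the bird",
    "a dog sees a bird",
]

bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def continuations(word):
    """Relative frequency of each word following `word` in the corpus."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(continuations("the"))  # {'dog': 0.25, 'cat': 0.5, 'bird': 0.25}
```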