Generalization by symbolic abstraction in cascaded recurrent networks


Author(s)

Boden, Mikael
Contributor(s)

T. Villman

Date(s)

01/03/2004

Abstract

Generalization performance in recurrent neural networks is enhanced by cascading several networks. By discretizing abstractions induced in one network, other networks can operate on a coarse symbolic level with increased performance on sparse and structural prediction tasks. The level of systematicity exhibited by the cascade of recurrent networks is assessed on the basis of three language domains. (C) 2004 Elsevier B.V. All rights reserved.
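To make the cascading idea concrete, below is a minimal sketch (not the paper's implementation) of two simple recurrent networks in cascade: the first network's continuous hidden states are discretized into a small symbol alphabet, and the second network operates on those coarse symbols. The network sizes, the nearest-centroid quantizer, and the toy data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

class SimpleRNN:
    """Elman-style recurrent network with a tanh hidden layer."""
    def __init__(self, n_in, n_hidden):
        self.W_in = rng.normal(0, 0.5, (n_hidden, n_in))
        self.W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x):
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return self.h

def discretize(h, codebook):
    """Map a continuous hidden state to the index of its nearest centroid."""
    return int(np.argmin(np.linalg.norm(codebook - h, axis=1)))

n_symbols = 4                         # size of the symbol alphabet (assumption)
net1 = SimpleRNN(n_in=3, n_hidden=8)
net2 = SimpleRNN(n_in=n_symbols, n_hidden=8)

# Hypothetical codebook; in practice the centroids would be induced from
# the first network's hidden-state clusters after training (e.g. by k-means).
codebook = rng.normal(0, 1, (n_symbols, 8))

sequence = rng.normal(0, 1, (5, 3))   # toy input sequence
for x in sequence:
    h1 = net1.step(x)                 # continuous abstraction in network 1
    sym = discretize(h1, codebook)    # coarse symbolic code
    onehot = np.eye(n_symbols)[sym]   # symbol fed to the downstream network
    h2 = net2.step(onehot)
    print(f"symbol={sym}")

In this setup the second network never sees the first network's raw activations, only the discretized symbol stream, which is the mechanism the abstract credits with improved generalization on sparse and structural prediction tasks.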

Identifier

http://espace.library.uq.edu.au/view/UQ:71094

Language(s)

eng

Publisher

Elsevier Science

Keywords #Recurrent Neural Network #Language #Generalization #Systematicity #Explicit Negative Evidence #Language-acquisition #Neural Networks #Starting Small #Context-free #Connectionism #Dynamics #Absence #Time #C1 #280212 Neural Networks, Genetic Algorithms and Fuzzy Logic #780101 Mathematical sciences
Type

Journal Article