905 results for Second language (L2) learning
Abstract:
This four-experiment series sought to evaluate the potential of children with neurosensory deafness and cochlear implants to exhibit auditory-visual and visual-visual stimulus equivalence relations within a matching-to-sample format. Twelve children who became deaf prior to acquiring language (prelingual) and four who became deaf afterwards (postlingual) were studied. All children learned auditory-visual conditional discriminations and nearly all showed emergent equivalence relations. Naming tests, conducted with a subset of the children, showed no consistent relationship to the equivalence-test outcomes. This study makes several contributions to the literature on stimulus equivalence. First, it demonstrates that both pre- and postlingually deaf children can acquire auditory-visual equivalence relations after cochlear implantation, thus demonstrating symbolic functioning. Second, it directs attention to a population that may be especially interesting for researchers seeking to analyze the relationship between speaker and listener repertoires. Third, it demonstrates the feasibility of conducting experimental studies of stimulus control processes within the limitations of a hospital, which these children must visit routinely for the maintenance of their cochlear implants.
Abstract:
The long short-term memory (LSTM) network is not the only neural network that learns a context-sensitive language. Second-order sequential cascaded networks (SCNs) can also induce, from a finite fragment of a context-sensitive language, a means of processing strings outside the training set. The dynamical behavior of the SCN is qualitatively distinct from that observed in LSTM networks. Differences in performance and dynamics are discussed.
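The abstract includes no code; purely as a minimal sketch of the kind of task at stake (learning a finite fragment of a context-sensitive language such as a^n b^n c^n and testing generalization to string lengths outside the training set), next-symbol prediction with a small recurrent network could be set up as follows. The architecture size, training regime, and boundary marker are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not from the paper): next-symbol prediction on the
# context-sensitive language a^n b^n c^n. Train on a finite fragment
# (small n), then test on longer strings outside the training set.
import torch
import torch.nn as nn
import torch.nn.functional as F

SYMS = {"S": 0, "a": 1, "b": 2, "c": 3}  # "S" marks string boundaries

def encode(n):
    s = "S" + "a" * n + "b" * n + "c" * n + "S"
    idx = torch.tensor([SYMS[ch] for ch in s])
    return F.one_hot(idx, num_classes=4).float()

class NextSymbolLSTM(nn.Module):
    def __init__(self, hidden=8):
        super().__init__()
        self.lstm = nn.LSTM(4, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 4)

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h)

net = NextSymbolLSTM()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(3000):                    # training fragment: n in 1..10
    x = encode(torch.randint(1, 11, (1,)).item()).unsqueeze(0)
    logits = net(x[:, :-1])                 # predict each next symbol
    loss = F.cross_entropy(logits[0], x[0, 1:].argmax(dim=1))
    opt.zero_grad(); loss.backward(); opt.step()

# Generalization: for n outside 1..10, the deterministic positions
# (inside the b and c blocks and at the closing "S") should be
# predicted correctly if the net has induced a counting mechanism.
x = encode(15)
pred = net(x.unsqueeze(0))[0].argmax(dim=1)
```

Whether a given architecture solves this by a counting-like dynamics is exactly the kind of question that comparisons of state-space behavior, as discussed in the abstract, are meant to probe.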
Abstract:
Input-driven models provide an explicit and readily testable account of language learning. Although we share Ellis's view that the statistical structure of the linguistic environment is a crucial and, until recently, relatively neglected variable in language learning, we also recognize that the approach makes three assumptions about cognition and language learning that are not universally shared. The three assumptions concern (a) the language learner as an intuitive statistician, (b) the constraints on what constitute relevant surface cues, and (c) the redescription problem faced by any system that seeks to derive abstract grammatical relations from the frequency of co-occurring surface forms and functions. These are significant assumptions that must be established if input-driven models are to gain wider acceptance. We comment on these issues and briefly describe a distributed, instance-based approach that retains the key features of the input-driven account advocated by Ellis but that also addresses shortcomings of the current approaches.
Abstract:
In this paper, we present research carried out in a state primary school that is very well equipped with ICT resources, including interactive whiteboards. The interactive whiteboard was used in the context of a Unit of Work for English learning based on a traditional oral story, ‘Jack and the Beanstalk’. It was also used to reinforce other topics, such as ‘At the beach’, ‘In the city’, and ‘Jobs’. An analysis of the use of the digital board, comprising observation records as well as questionnaires for teachers and pupils, was carried out.
Abstract:
As a natural form of human-machine interaction, gesture recognition entails a strong research component in areas such as computer vision and machine learning. Gesture recognition is an area with very diverse applications, giving users a more natural and simpler way to communicate with computer-based systems without the need for extra devices. Thus, the main goal of research on gesture recognition applied to human-machine interaction is to create systems that can identify specific gestures and use them to convey information or to control devices. For that, vision-based interfaces for gesture recognition need to detect the hand quickly and robustly and to recognize gestures in real time. Today, vision-based gesture recognition systems work as specific solutions, built to solve one particular problem and configured to work in one particular way. This research project studied and implemented sufficiently generic solutions, based on machine learning algorithms, that can be applied to a wide range of human-machine interface systems for real-time gesture recognition. The proposed solution, the Gesture Learning Module Architecture (GeLMA), makes it simple to define a command set based on static and dynamic gestures, and it can easily be integrated and configured for use in a variety of applications. It is a low-cost system, easy to train and use, built solely from software libraries. Experiments showed that the system reached an accuracy of 99.2% for static gesture recognition and an average accuracy of 93.7% for dynamic gesture recognition. To validate the proposed solution, two complete systems were implemented. The first is a real-time system able to help a referee officiate a robotic soccer game. The proposed solution combines a vision-based gesture recognition system with the definition of a formal language, CommLang Referee, to which we gave the designation Referee Command Language Interface System (ReCLIS). The system identifies commands based on a set of static and dynamic gestures performed by the referee, which are then sent to a computer interface that transmits the corresponding information to the robots. The second is a real-time system able to interpret a subset of Portuguese Sign Language. Experiments showed that the system was able to reliably recognize the vowels in real time. Although the implemented solution was only trained to recognize the five vowels, it is easily extensible to the rest of the alphabet. The experiments also showed that the core of vision-based interaction systems can be the same for all applications, which simplifies their implementation. The proposed solution also has the advantage of being generic enough, and a solid foundation, for the development of gesture-recognition-based systems that can be easily integrated with any human-machine interface application.
The formal interface-definition language can be redefined, and the system can easily be configured and trained with a different set of gestures for integration into the final solution.
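The abstract gives no implementation details for GeLMA's command definitions; purely as an illustrative sketch (all class and gesture names below are hypothetical, not the actual GeLMA or CommLang Referee API), a command set built from static and dynamic gesture labels reduces to a mapping from recognized gestures to actions:

```python
# Illustrative sketch only: hypothetical names, not the actual GeLMA API.
# A command set defined over static and dynamic gestures, as the abstract
# describes, is at bottom a mapping from recognized gesture labels to actions.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Gesture:
    name: str
    kind: str  # "static" or "dynamic"

class CommandInterface:
    def __init__(self):
        self._actions: dict[Gesture, Callable[[], None]] = {}

    def bind(self, gesture: Gesture, action: Callable[[], None]) -> None:
        self._actions[gesture] = action

    def dispatch(self, gesture: Gesture) -> None:
        # Called with the label produced by the recognition module.
        action = self._actions.get(gesture)
        if action:
            action()

# Hypothetical referee command set in the spirit of the ReCLIS application:
iface = CommandInterface()
iface.bind(Gesture("kick_off", "static"), lambda: print("KICK_OFF sent to robots"))
iface.bind(Gesture("game_stop", "dynamic"), lambda: print("STOP sent to robots"))
iface.dispatch(Gesture("kick_off", "static"))
```

Redefining the command language then amounts to rebinding gesture labels to actions, which matches the abstract's claim that the interface can be reconfigured and retrained with a different gesture set.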
Abstract:
This study investigates how the English language is learned in Portugal. First-year students at the Faculty of Social Sciences and Humanities of the New University of Lisbon were selected as participants in the case study. A questionnaire and focus groups were used as data-collection tools: 115 students completed the questionnaire, after which 12 students were selected for more detailed focus-group discussions. The results show that most of the students' English knowledge comes from outside the classroom, through movies, songs, computer games, the Internet, communication with friends, and other sources. The results also show that motivation is very important in the language-learning process, and that motivated students acquire the language faster and more easily.
Abstract:
Sign language is the form of communication used by Deaf people, which, in most cases, has been learned since childhood. The problem arises when a non-Deaf person tries to communicate with a Deaf person, for example, when non-Deaf parents try to communicate with their Deaf child. In most cases, this situation arises because the parents did not have time to properly learn sign language. This dissertation proposes teaching sign language through the use of serious games. Similar solutions to this proposal do exist, but they are scarce and limited. For this reason, the proposed solution is built around a natural user interface intended to create a new concept in this field. The validation of this work consisted in the implementation of a serious-game prototype that can be used as a resource for learning Portuguese Sign Language. In this validation, a module responsible for recognizing sign language was implemented first; this stage increased interaction and produced an algorithm capable of accurately recognizing sign language. In a second stage, the proposal was studied so that its pros and cons could be determined and considered in future work.
Abstract:
It is currently widely perceived among English as a Foreign Language (EFL) teaching professionals that motivation is a central factor for success in language learning. This work aims to examine and raise teachers’ awareness of the role of assessment and feedback in the process of language teaching and learning at the polytechnic school in Benguela, in order to develop and/or enhance their students’ motivation for learning. The paper therefore defines and discusses the key terms, as well as the techniques and strategies for effective feedback provision in the context under study. It also collects data through interviews and questionnaires, and suggests the assessment and feedback types to be implemented at the polytechnic school in Benguela.
Abstract:
Lecture Notes in Computational Vision and Biomechanics series, ISSN 2212-9391, vol. 19.
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture, based on computer vision and machine learning, that can be used with any interface for human-computer interaction. The proposed solution is composed of three main modules: a pre-processing and hand-segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic enough, with the trained models able to work in real time, to allow its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
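As a rough sketch of how the two recognition modules fit together (using scikit-learn and hmmlearn as stand-in libraries; the features, dimensions, and hyperparameters below are placeholder assumptions, since the abstract does not specify them):

```python
# Rough sketch of the two modules the abstract describes: an SVM for
# static hand postures and one HMM per dynamic gesture. scikit-learn and
# hmmlearn are stand-ins; the paper's actual features are not specified.
import numpy as np
from sklearn.svm import SVC
from hmmlearn import hmm

# --- Static gestures: one feature vector per segmented hand image ---
X_static = np.random.rand(200, 32)        # placeholder hand-shape features
y_static = np.random.randint(0, 5, 200)   # e.g. 5 posture classes
posture_clf = SVC(kernel="rbf").fit(X_static, y_static)

# --- Dynamic gestures: one HMM trained per gesture class ---
# Each training sample is a sequence of per-frame features (e.g. a hand
# trajectory); hmmlearn takes concatenated sequences plus their lengths.
def train_gesture_hmm(sequences):
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    model.fit(np.concatenate(sequences), lengths)
    return model

gesture_models = {
    "wave": train_gesture_hmm([np.random.rand(30, 8) for _ in range(20)]),
    "circle": train_gesture_hmm([np.random.rand(30, 8) for _ in range(20)]),
}

def classify_dynamic(sequence):
    # Pick the gesture whose HMM assigns the observed sequence the
    # highest log-likelihood.
    return max(gesture_models, key=lambda g: gesture_models[g].score(sequence))

print(posture_clf.predict(X_static[:1]))
print(classify_dynamic(np.random.rand(30, 8)))
```

Training one HMM per gesture class and scoring by log-likelihood mirrors the abstract's description of a separate model for each dynamic gesture, while the posture SVM handles the static case.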
Abstract:
Vision-based hand gesture recognition is an area of active current research in computer vision and machine learning. Being a natural means of human interaction, it is an area in which many researchers are working, with the goal of making human-computer interaction (HCI) easier and more natural, without the need for extra devices. The primary goal of gesture recognition research is thus to create systems that can identify specific human gestures and use them, for example, to convey information. For that, vision-based hand gesture interfaces require fast and extremely robust hand detection, as well as gesture recognition in real time. Hand gestures are a powerful human communication modality with many potential applications, and in this context we have sign language recognition, the communication method of deaf people. Sign languages are neither standard nor universal, and their grammars differ from country to country. In this paper, a real-time system able to interpret Portuguese Sign Language is presented and described. Experiments showed that the system was able to reliably recognize the vowels in real time, with an accuracy of 99.4% with one dataset of features and an accuracy of 99.6% with a second dataset of features. Although the implemented solution was only trained to recognize the vowels, it is easily extended to recognize the rest of the alphabet, providing a solid foundation for the development of any vision-based sign language recognition user interface system.
Abstract:
PhD thesis in Electronics and Computer Engineering.
Abstract:
How long does it take to learn another language? How many words do you need to learn? Are languages within the reach of everybody? Which teachers would you choose, and which teachers should you avoid? These are some of the questions you ask yourself when you start learning a new language. The Word Brain provides the answers. If you have learned foreign languages in the past, consider reading it. If you or your children need to learn languages in the future, you must read it. What you will discover in two hours will forever change the way you see languages and language learning. The principles of The Word Brain are timeless. Our children’s grandchildren will follow them when they discover the people of our planet.