879 results for Portuguese sign language recognition
Abstract:
This study shows the importance of contact between very young children and foreign languages. The work investigates, through a plurilingual approach focused on English and Portuguese Sign Language, the sensitization of a group of pupils in the 1st Cycle of Basic Education (primary school) to a language other than their mother tongue. The research adopted an action-research stance, relying chiefly on a qualitative methodology and, to a lesser extent, on a quantitative one; through the various activities they carried out, the pupils acquired different competences in the two languages. This allowed the pupils to awaken their full potential for learning these two languages (English and Portuguese Sign Language), taking as a starting point their sensitization and the learning of some vocabulary. We believe this plurilingual approach can help pupils develop linguistic, cognitive and personal skills such as: intercomprehension; knowledge of specific features of the different languages around them; linguistic comparison between those languages; lexical comprehension; the ability to relate languages to cultures; and, above all, respect for and appreciation of linguistic and cultural diversity. Classroom activities targeted oral comprehension and production, in a process of sensitization and learning of some vocabulary of these languages; the results were later analysed through activity observation grids, two questionnaire surveys and photographs. The observations and conclusions drawn from this analysis confirmed that sensitization to English and to Portuguese Sign Language promotes the child's development, as well as the appreciation of the corresponding linguistic and cultural diversity.
Abstract:
Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Therefore, identification of these facial gestures is essential to sign language recognition. One problem with detection of such grammatical indicators is occlusion recovery. If the signer's hand blocks his/her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, which are based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by various Deaf native signers and detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes.
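As a loose, hypothetical sketch of the kind of geometric interpretation this abstract describes (the function name, the normalization by inter-eye distance, and all thresholds are invented for illustration, not taken from the system itself):

```python
# Hypothetical sketch: classify an eyebrow grammatical marker from the
# geometric relationship between tracked eyebrow and eye templates.
# Thresholds and normalization are illustrative assumptions.

def eyebrow_marker(brow_y, eye_y, inter_eye_dist, neutral_ratio=0.25, tol=0.05):
    """Classify eyebrow state from the vertical brow-eye distance,
    normalized by inter-eye distance so the cue is scale-invariant.
    Coordinates are in image pixels, y increasing downward."""
    ratio = (eye_y - brow_y) / inter_eye_dist  # larger = brows higher above eyes
    if ratio > neutral_ratio + tol:
        return "raised"     # e.g., a yes-no question marker
    if ratio < neutral_ratio - tol:
        return "furrowed"   # e.g., a wh-question marker
    return "neutral"
```

Normalizing by a stable anthropometric distance (here, inter-eye distance) is one common way to make such a geometric cue robust to the signer's distance from the camera.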
Abstract:
Master's final project submitted to obtain the degree of Master in Informatics and Computer Engineering
Abstract:
The purpose of this study was to verify discriminative control by segments of signs in adolescents with deafness who use Brazilian Sign Language (BSL). Four adolescents with bilateral deafness, each with three years of BSL instruction, watched a video presenting a children's tale in BSL. After showing accurate understanding of the story, participants watched another video of the same story in which 12 signs were altered in one of their segments (hand configuration, place of articulation, or movement). They apparently did not detect the alterations. However, when the signs were presented in isolation in a matching-to-sample test, they virtually always selected the picture corresponding to the unaltered signs. Three participants selected an unfamiliar picture in 50% or more of the trials with an altered sign as the sample, showing that they could detect the majority of the altered signs.
Abstract:
This essay reports on a series of studies dealing with the acquisition of a first and a second language by deaf children in inclusive contexts. Owing to hearing deprivation, and because they lack an intact auditory channel, deaf people end up not acquiring naturally the language common to Brazilians in general. Among the studies carried out are those dealing with the written production of deaf undergraduates whose path in language acquisition did not follow the model prescribed by current theorists. The research shows that the students analysed did not acquire sign language as a first language in early childhood, nor Portuguese as a second language, contradicting the bilingual model adopted in Brazil.
Abstract:
TV is a powerful communication medium: it reaches all social classes, is available in 98% of Brazilian homes, and has been used to distribute educational materials since the 1950s. By 2016, Open Digital TV (ODTV) in Brazil was expected to cover the entire national territory, replacing analog TV signals. Concerns about accessibility for People with Special Needs (PSN) in that medium have intensified worldwide since the 1990s. Brazil is estimated to have 24.6 million PSN, 23% of them with some type of hearing loss; of these, 2.9% are reported as deaf. Brazilian Sign Language (LIBRAS) is considered the first literacy language for deaf people in Brazil. In this context, this paper presents a proposal to facilitate the generation of educational content for ODTV, based on two components. The first, called SynchrLIBRAS, allows the synchronization of Portuguese subtitles and of a LIBRAS translation window for videos downloaded from the Web. The second allows this content to be viewed through the Brazilian Digital TV System and IPTV, environments that implement the Ginga-NCL middleware. The main focus of this paper is the presentation of the first component, SynchrLIBRAS. The proposal has educational purposes, helping teach LIBRAS to people who may collaborate with the social inclusion of deaf people.
Abstract:
This paper presents a methodology for adapting an advanced communication system for deaf people to a new domain. The methodology is a user-centered design approach consisting of four main steps: requirement analysis, parallel corpus generation, technology adaptation to the new domain and, finally, system evaluation. Here, the new domain considered is dialogue at a hotel reception desk. With this methodology, it was possible to develop the system in a few months and obtain very good performance: speech recognition and translation rates of around 90%, with short processing times.
Abstract:
Brazilian sign language has been little studied by sociolinguistics until recent years, owing to its only recent legal and scientific recognition. It has nonetheless been a language used in Brazil since imperial times, according to the records available at the National Institute for Deaf Education. Aiming to contribute to further investigations of a sociolinguistic nature, we investigated the occurrence of linguistic variation in the specific case of the signs used for FATHER and MOTHER in the capital city of Florianópolis. The results showed changes in the use of the two signs: what was once considered the standard variant is now falling into disuse, new prestigious variants are emerging, and it was possible to confirm a process of historical change related to cultural transformations and social life.
Abstract:
Objective: There are currently no adult mental health outcome measures that have been translated into Australian sign language (Auslan). Without a valid and reliable Auslan outcome measure, empirical research into the efficacy of mental health interventions for sign language users is unattainable. To address this research problem the Outcome Rating Scale (ORS), a measure of general functioning, was translated into Auslan and recorded on to digital video disk for use in clinical settings. The purpose of the present study was therefore to examine the reliability, validity and acceptability of an Auslan version of the ORS (ORS-Auslan). Method: The ORS-Auslan was administered to 44 deaf people who use Auslan as their first language and who identify as members of a deaf community (termed ‘Deaf’ people) on their first presentation to a mental health or counselling facility and to 55 Deaf people in the general community. The community sample also completed an Auslan version of the Depression Anxiety Stress Scale-21 (DASS-21). Results: t-Tests indicated significant differences between the mean scores for the clinical and community sample. Internal consistency was acceptable given the low number of items in the ORS-Auslan. Construct validity was established by significant correlations between total scores on the DASS-21-Auslan and ORS-Auslan. Acceptability of ORS-Auslan was evident in the completion rate of 93% compared with 63% for DASS-21-Auslan. Conclusions: This is the only Auslan outcome measure available that can be used across a wide variety of mental health and clinical settings. The ORS-Auslan provides mental health clinicians with a reliable and valid, brief measure of general functioning that can significantly distinguish between clinical and non-clinical presentations for members of the Deaf community.
Abstract:
For sign languages used by deaf communities, linguistic corpora have until recently been unavailable, due to the lack of a writing system and a written culture in these communities, and the very recent advent of digital video. Recent improvements in video and computer technology have now made larger sign language datasets possible; however, large sign language datasets that are fully machine-readable are still elusive. This is due to two challenges. 1. Inconsistencies that arise when signs are annotated by means of spoken/written language. 2. The fact that many parts of signed interaction are not necessarily fully composed of lexical signs (equivalent of words), instead consisting of constructions that are less conventionalised. As sign language corpus building progresses, the potential for some standards in annotation is beginning to emerge. But before this project, there were no attempts to standardise these practices across corpora, which is required to be able to compare data crosslinguistically. This project thus had the following aims: 1. To develop annotation standards for glosses (lexical/word level) 2. To test their reliability and validity 3. To improve current software tools that facilitate a reliable workflow Overall the project aimed not only to set a standard for the whole field of sign language studies throughout the world but also to make significant advances toward two of the world’s largest machine-readable datasets for sign languages – specifically the BSL Corpus (British Sign Language, http://bslcorpusproject.org) and the Corpus NGT (Sign Language of the Netherlands, http://www.ru.nl/corpusngt).
Abstract:
The aim of the present study was to investigate the functional role of syllables in sign language and how different phonological combinations influence sign production. Moreover, the influence of age of acquisition was evaluated. Deaf signers (native and non-native) of Catalan Sign Language (LSC) were asked in a picture-sign interference task to sign picture names while ignoring distractor-signs with which they shared two phonological parameters (out of the three main sign parameters: Location, Movement, and Handshape). The results revealed a different impact for each of the three phonological combinations. While no effect was observed for the combination Handshape-Location, the combination Handshape-Movement slowed down signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production.
Abstract:
An automated system for detection of head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the motion signal's peaks and valleys. Each parameter is analyzed independently, due to the fact that a number of relevant head movements in ASL are associated with major changes around one rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists. Initial results are promising, as the system matches the linguists' labels in a significant number of cases.
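A minimal sketch of how the peaks and valleys of a single rotation parameter (here yaw, for side-to-side movement) might be analyzed to flag a head shake. This is an illustration under invented thresholds, not the described system:

```python
import numpy as np

# Hypothetical sketch: flag a "head shake" from a per-frame yaw signal by
# analyzing the number and amplitude of direction changes (peaks/valleys),
# as the described approach does for each rotation parameter independently.
# Thresholds below are illustrative assumptions.

def find_extrema(signal):
    """Indices where the signal changes direction (peaks and valleys)."""
    diffs = np.sign(np.diff(signal))
    return [i + 1 for i in range(len(diffs) - 1)
            if diffs[i] != 0 and diffs[i] != diffs[i + 1]]

def is_head_shake(yaw, min_alternations=3, min_amplitude=5.0):
    """Label a yaw sequence (degrees per frame) as a head shake if it
    alternates direction often enough with sufficient mean amplitude."""
    extrema = find_extrema(yaw)
    if len(extrema) < min_alternations:
        return False
    amplitudes = [abs(yaw[extrema[i + 1]] - yaw[extrema[i]])
                  for i in range(len(extrema) - 1)]
    return bool(np.mean(amplitudes) >= min_amplitude)

# Example: an oscillating yaw (side-to-side shake) vs. small tracking jitter
shake = [0, 8, -7, 9, -8, 7, 0]
still = [0, 0.5, 0.2, 0.4, 0.1, 0.3, 0]
```

Requiring both a minimum number of alternations and a minimum amplitude is one simple way to separate deliberate headshakes from tracker noise.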
Abstract:
Locating hands in sign language video is challenging due to a number of factors. Hand appearance varies widely across signers due to anthropometric variations and varying levels of signer proficiency. Video can be captured under varying illumination, camera resolutions, and levels of scene clutter, e.g., high-res video captured in a studio vs. low-res video gathered by a web cam in a user’s home. Moreover, the signers’ clothing varies, e.g., skin-toned clothing vs. contrasting clothing, short-sleeved vs. long-sleeved shirts, etc. In this work, the hand detection problem is addressed in an appearance matching framework. The Histogram of Oriented Gradient (HOG) based matching score function is reformulated to allow non-rigid alignment between pairs of images to account for hand shape variation. The resulting alignment score is used within a Support Vector Machine hand/not-hand classifier for hand detection. The new matching score function yields improved performance (in ROC area and hand detection rate) over the Vocabulary Guided Pyramid Match Kernel (VGPMK) and the traditional, rigid HOG distance on American Sign Language video gestured by expert signers. The proposed match score function is computationally less expensive (for training and testing), has fewer parameters and is less sensitive to parameter settings than VGPMK. The proposed detector works well on test sequences from an inexpert signer in a non-studio setting with cluttered background.
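For illustration only, here is a drastically simplified version of the rigid HOG baseline that the paper improves on: one unsigned-orientation histogram per patch, with no cells, no blocks, and none of the paper's non-rigid alignment. All names and parameters are invented for the sketch:

```python
import numpy as np

def hog_descriptor(img, n_bins=9):
    """Toy HOG: a single L2-normalized histogram of unsigned gradient
    orientations over the whole patch, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))       # vertical, horizontal gradients
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # fold orientations into [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def rigid_hog_distance(a, b):
    """Rigid (no-alignment) distance between two image patches."""
    return np.linalg.norm(hog_descriptor(a) - hog_descriptor(b))
```

A descriptor like this is sensitive to exactly the misalignments the abstract targets: two images of the same hand shifted or deformed relative to each other produce different histograms, which is the motivation for reformulating the matching score to allow non-rigid alignment before comparison.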
Abstract:
The present article analyses the preferences of deaf sign language users of the TV sign language interpretation service, as well as the way TV channels provide that service on television in Spain. The objective is to establish whether the way this accessibility service is provided matches the preferences of users or differs from them. The analysis presents the opinions on this service of deaf people who use Spanish sign language as their first language for communication. A study was also conducted of the programmes broadcast with sign language during the week of 10-16 March 2014. The main findings reveal that deaf viewers are dissatisfied with broadcasting times: they ask for news programmes with sign language, they would rather have the interpretation carried out by deaf people who use sign language, and they prefer the interpreter to be the main image on screen. Concerning the programmes broadcast, the analysis shows that the majority of programmes with sign language are entertainment programmes broadcast at night, that the interpretation is carried out by hearing people who use sign language, and that the interpreter's image is displayed in a corner of the screen.
Abstract:
Deaf people are perceived by hearing people as living in a silent world. Yet, silence cannot exist without sound, so if sound is not heard, can there be silence? From a linguistic point of view, silence is the absence of, or an intermission in, communication. Silence can be communicative or noncommunicative. Thus, silence must exist in sign languages as well. Sign languages are based on visual perception and on production through movement and sight. Silence must, therefore, be visually perceptible; and, if there is such a thing as visual silence, what does it look like? The paper analyses the topic of silence from a Deaf perspective. The main aspects explored are the perception and evaluation of acoustic noise and silence by Deaf people; the conceptualisation of silence in visual languages, such as sign languages; the qualities of visual silence; the meaning of silence as absence of communication (particularly between hearing and Deaf people); social rules for silence; and silencing strategies.