928 results for American Sign Language


Relevance: 100.00%

Publisher:

Abstract:

Sign language animations can improve the accessibility of information and services for deaf people who have low literacy in spoken/written languages. Because sign languages differ from spoken/written languages in word order, syntax, and lexicon, many deaf people find it difficult to comprehend text on a computer screen or captions on a television. Animated characters performing sign language in a comprehensible way could make this information accessible. Facial expressions and other non-manual components play an important role in the naturalness and understandability of these animations, and their coordination with the manual signs is crucial for the interpretation of the signed message. Software that advances the support for facial expressions in the generation of sign language animation could make this technology more acceptable to deaf people. In this survey, we discuss the challenges in facial expression synthesis and we compare and critique the state-of-the-art projects on generating facial expressions in sign language animations. We begin with an overview of the linguistics of facial expressions, sign language animation technologies, and some background on animating facial expressions, followed by a discussion of the search strategy and criteria used to select the five projects that are the primary focus of this survey. We then introduce the work of the five projects under consideration and compare their contributions in terms of the specific sign language supported, the categories of facial expressions investigated, the focus of the animation generation, the use of annotated corpora, the input data or hypotheses of their approach, and other factors. Strengths and drawbacks of the individual projects are identified from these perspectives. The survey concludes with our current research focus in this area and future prospects.


Deaf people who use sign language are potential users of emerging telecommunications innovations such as videotelephony. There has been little research exploring their thoughts and experiences in using this technology. In this paper, the experiences of a Deaf person as a research insider in a current telecommunications study are described, and issues of the researcher-participant relationship, data integrity, interview and interpreter skills, communication and cultural aspects of the participating community, and the impact of this type of research are explored.


This paper examines the advantages of using the World Wide Web (Web) as a resource to teach hearing primary-school-aged children Australian Sign Language (Auslan). There is a trend towards educating signing deaf children in mainstream schools; it is therefore important to teach hearing children sign language to enable meaningful communication and the formation of social relationships between hearing and deaf students. The authors compare various methods of teaching sign language with the Web and describe a selection of the available instructional material. Considerations for designing appropriate sign language teaching material for the Web are discussed, particularly in the context of designing content that engages a primary-school-aged audience.


Objective: There are currently no adult mental health outcome measures that have been translated into Australian Sign Language (Auslan). Without a valid and reliable Auslan outcome measure, empirical research into the efficacy of mental health interventions for sign language users is unattainable. To address this problem, the Outcome Rating Scale (ORS), a measure of general functioning, was translated into Auslan and recorded onto digital video disc for use in clinical settings. The purpose of the present study was therefore to examine the reliability, validity and acceptability of an Auslan version of the ORS (ORS-Auslan).
Method: The ORS-Auslan was administered to 44 deaf people who use Auslan as their first language and who identify as members of a deaf community (termed 'Deaf' people) on their first presentation to a mental health or counselling facility, and to 55 Deaf people in the general community. The community sample also completed an Auslan version of the Depression Anxiety Stress Scale-21 (DASS-21).
Results: t-tests indicated significant differences between the mean scores of the clinical and community samples. Internal consistency was acceptable given the small number of items in the ORS-Auslan. Construct validity was established by significant correlations between total scores on the DASS-21-Auslan and the ORS-Auslan. The acceptability of the ORS-Auslan was evident in its completion rate of 93%, compared with 63% for the DASS-21-Auslan.
Conclusions: This is the only Auslan outcome measure available that can be used across a wide variety of mental health and clinical settings. The ORS-Auslan provides mental health clinicians with a reliable and valid, brief measure of general functioning that can significantly distinguish between clinical and non-clinical presentations for members of the Deaf community.
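The clinical/community comparison described in this abstract rests on an independent-samples t-test. As a minimal sketch, the following computes Welch's t statistic (which does not assume equal variances) using only the Python standard library; the ORS-style scores below are invented for illustration, not the study's data.

```python
# Welch's t statistic for two independent samples, stdlib only.
# The sample scores are hypothetical, not taken from the study.
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t: difference of means over the combined standard error."""
    se2 = variance(a) / len(a) + variance(b) / len(b)
    return (mean(a) - mean(b)) / se2 ** 0.5

clinical = [10, 12, 14]    # hypothetical ORS totals (lower = poorer functioning)
community = [20, 22, 24]   # hypothetical community-sample totals
t = welch_t(clinical, community)
print(round(t, 2))  # a large negative t: clinical mean well below community mean
```

In practice one would also compute the degrees of freedom (Welch-Satterthwaite) and a p-value; the statistic alone is shown here to keep the sketch self-contained.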



The purpose of this study was to verify discriminative control by segments of signs in adolescents with deafness who use Brazilian Sign Language (BSL). Four adolescents with bilateral deafness, who had received 3 years of BSL instruction, watched a video presenting a children's tale in BSL. After demonstrating accurate understanding of the story, participants watched another video of the same story with 12 signs altered in one of their segments (hand configuration, place of articulation, or movement). They apparently did not detect the alterations. However, when the signs were presented in isolation in a matching-to-sample test, they virtually always selected the picture corresponding to the unaltered signs. Three participants selected an unfamiliar picture in 50% or more of the trials with an altered sign as the sample, showing that they could detect the majority of the altered signs.


TV is a powerful communication medium that reaches all social classes and is available in 98% of Brazilian homes. It has been used to distribute educational materials since the 1950s. By 2016, Open Digital TV (ODTV) in Brazil was expected to cover the entire national territory, replacing analog TV signals. Worldwide, concerns about accessibility for People with Special Needs (PSN) in that medium have intensified since the 1990s. Brazil has an estimated 24.6 million PSN, 23% of whom have some type of hearing loss; of these, 2.9% are reported as deaf. Brazilian Sign Language (LIBRAS) is considered the first literacy language for deaf people in Brazil. In this context, this paper presents a proposal to facilitate the generation of educational content for ODTV, based on two components. The first, called SynchrLIBRAS, allows the synchronization of Portuguese subtitles and a LIBRAS translator window for videos downloaded from the Web. The second component allows the visualization of this content through the Brazilian Digital TV System and IPTV, environments that implement the Ginga-NCL middleware. The main focus of this paper is the presentation of the first component, SynchrLIBRAS. The proposal has educational purposes, contributing to teaching LIBRAS to people who may collaborate with the social inclusion of deaf people.


[ES] This paper presents the design of a multimedia tool that translates into Spanish Sign Language the announcement messages that a public address system can provide. The aim of the work is to provide a tool that improves the social inclusion of people with hearing impairments. For this purpose, the environment and the typical audio messages of an airport were selected to develop this pilot project. Finally, the audio messages were translated into Spanish Sign Language by synthesizing an avatar, using the rotoscopy animation technique applied to video recordings of a translator. The final results were evaluated by deaf people.


This paper describes a preprocessing module for improving the performance of a Spanish into Spanish Sign Language (Lengua de Signos Española: LSE) translation system when dealing with sparse training data. This preprocessing module replaces Spanish words with associated tags. The list of Spanish words (vocabulary) and associated tags used by this module is computed automatically, considering the signs that show the highest probability of being the translation of each Spanish word. This automatic tag extraction has been compared to a manual strategy, achieving almost the same improvement. In this analysis, several alternatives for dealing with non-relevant words have been studied; non-relevant words are Spanish words not assigned to any sign. The preprocessing module has been incorporated into two well-known statistical translation architectures: a phrase-based system and a Statistical Finite State Transducer (SFST). The system has been developed for a specific application domain: the renewal of Identity Documents and Driver's Licenses. To evaluate the system, a parallel corpus made up of 4080 Spanish sentences and their LSE translations was used. The evaluation results revealed a significant performance improvement when this preprocessing module was included. In the phrase-based system, the proposed module gave rise to an increase in BLEU (Bilingual Evaluation Understudy) from 73.8% to 81.0% and an increase in the human evaluation score from 0.64 to 0.83. In the case of the SFST, BLEU increased from 70.6% to 78.4% and the human evaluation score from 0.65 to 0.82.
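The core of the preprocessing step described above can be sketched as a simple word-to-tag substitution: each Spanish word is mapped to the sign most likely to translate it, and non-relevant words (those with no associated sign) are dropped. The vocabulary, alignment counts, and tag names below are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of word-to-tag categorization for sparse-data MT.
# Alignment counts and sign tags are invented, not the paper's data.

def build_tag_map(word_sign_counts):
    """For every word, keep the sign with the highest co-occurrence count."""
    return {word: max(signs, key=signs.get)
            for word, signs in word_sign_counts.items() if signs}

def categorize(sentence, tag_map):
    """Replace each word by its tag; drop words with no associated sign."""
    return [tag_map[w] for w in sentence.split() if w in tag_map]

# Toy counts: how often each word aligned with each sign in a corpus.
counts = {
    "renovar": {"RENEW": 9, "CHANGE": 1},
    "carnet": {"LICENSE": 7},
    "de": {},            # non-relevant: never aligned to any sign
    "conducir": {"DRIVE": 8},
}
tag_map = build_tag_map(counts)
print(categorize("renovar el carnet de conducir", tag_map))
# "el" and "de" have no tag, so they are dropped
```

Replacing open-vocabulary words with a small tag set in this way reduces data sparseness, which is why both the phrase-based and SFST systems benefit from it.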


This paper proposes the use of Factored Translation Models (FTMs) for improving a speech into sign language translation system. These FTMs allow syntactic-semantic information to be incorporated during the translation process, which significantly reduces the translation error rate. The paper also analyses different alternatives for dealing with non-relevant words. The speech into sign language translation system has been developed and evaluated in a specific application domain: the renewal of Identity Documents and Driver's Licenses. The translation system uses a phrase-based translation system (Moses). The evaluation results reveal that BLEU (BiLingual Evaluation Understudy) improved from 69.1% to 73.9% and the mSER (multiple-reference Sign Error Rate) was reduced from 30.6% to 24.8%.


This paper describes a categorization module for improving the performance of a Spanish into Spanish Sign Language (LSE) translation system. This categorization module replaces Spanish words with associated tags. When implementing this module, several alternatives for dealing with non-relevant words were studied; non-relevant words are Spanish words that are not relevant to the translation process. The categorization module has been incorporated into a phrase-based system and into a Statistical Finite State Transducer (SFST). The evaluation results reveal that BLEU increased from 69.11% to 78.79% for the phrase-based system and from 69.84% to 75.59% for the SFST.


This paper describes the design, development and field evaluation of a machine translation system from Spanish to Spanish Sign Language (LSE: Lengua de Signos Española). The developed system focuses on helping Deaf people when they want to renew their Driver’s License. The system is made up of a speech recognizer (for decoding the spoken utterance into a word sequence), a natural language translator (for converting a word sequence into a sequence of signs belonging to the sign language), and a 3D avatar animation module (for playing back the signs). For the natural language translator, three technological approaches have been implemented and evaluated: an example-based strategy, a rule-based translation method and a statistical translator. For the final version, the implemented language translator combines all the alternatives into a hierarchical structure. This paper includes a detailed description of the field evaluation. This evaluation was carried out in the Local Traffic Office in Toledo involving real government employees and Deaf people. The evaluation includes objective measurements from the system and subjective information from questionnaires. The paper details the main problems found and a discussion on how to solve them (some of them specific for LSE).
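The hierarchical combination of the three translators described in this abstract can be pictured as a fallback cascade: try the example-based translator first, fall back to rules, and use the statistical model as a last resort. This arrangement is an assumption for illustration (the paper does not specify its exact hierarchy), and all engines below are toy stubs with invented vocabularies.

```python
# Hypothetical cascade of translation engines, mimicking the hierarchical
# combination described in the abstract. All data here is invented.

def example_based(words):
    """Return a sign sequence only for sentences seen verbatim in the corpus."""
    memory = {("renew", "license"): ["RENEW", "LICENSE"]}
    return memory.get(tuple(words))

def rule_based(words):
    """Tiny word-to-sign rules; give up (None) if any word is uncovered."""
    rules = {"renew": "RENEW", "license": "LICENSE", "today": "TODAY"}
    if all(w in rules for w in words):
        return [rules[w] for w in words]
    return None

def statistical(words):
    """Last-resort stand-in for the statistical translator: always answers."""
    return [w.upper() for w in words]

def translate(words):
    """Try each engine in order of reliability; take the first answer."""
    for engine in (example_based, rule_based, statistical):
        signs = engine(words)
        if signs is not None:
            return signs

print(translate(["renew", "license"]))           # exact corpus match
print(translate(["renew", "license", "today"]))  # covered by the rules
print(translate(["hello", "world"]))             # statistical fallback
```

The appeal of such a cascade is that the most reliable engine answers whenever it can, while the statistical model guarantees coverage for unseen input.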


This paper proposes a methodology for developing a speech into sign language translation system considering a user-centered strategy. This methodology consists of four main steps: analysis of technical and user requirements, data collection, technology adaptation to the new domain, and finally, evaluation of the system. The two most demanding tasks are the sign generation and the translation rules generation. Many other aspects can be updated automatically from a parallel corpus that includes sentences (in Spanish and LSE: Lengua de Signos Española) related to the application domain. In this paper, we explain how to apply this methodology in order to develop two translation systems in two specific domains: bus transport information and hotel reception.


This paper presents a methodology for adapting an advanced communication system for deaf people to a new domain. The methodology is a user-centered design approach consisting of four main steps: requirement analysis, parallel corpus generation, technology adaptation to the new domain, and finally, system evaluation. In this paper, the new domain considered is dialogues at a hotel reception. With this methodology, it was possible to develop the system in a few months and obtain very good performance: good speech recognition and translation rates (around 90%) with short processing times.


This paper describes the application of language translation technologies for generating bus information in Spanish Sign Language (LSE: Lengua de Signos Española). In this work, two main systems have been developed: the first for translating text messages from information panels and the second for translating spoken Spanish into natural conversations at the information point of the bus company. Both systems are made up of a natural language translator (for converting a word sequence into a sequence of LSE signs) and a 3D avatar animation module (for playing back the signs). For the natural language translator, two technological approaches have been analyzed and integrated: an example-based strategy and a statistical translator. When translating spoken utterances, it is also necessary to incorporate a speech recognizer for decoding the spoken utterance into a word sequence, prior to the language translation module. This paper includes a detailed description of the field evaluation carried out in this domain. The evaluation was carried out at the customer information office in Madrid, involving both real bus company employees and deaf people, and includes objective measurements from the system and information from questionnaires. In the field evaluation, the complete translation system presented a Sign Error Rate (SER) of less than 10% and a BLEU score greater than 90%.
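The Sign Error Rate reported in this evaluation is conventionally computed like word error rate: the word-level (here, sign-level) edit distance between the produced sequence and a reference, divided by the reference length. A minimal sketch follows; the sign glosses are invented for illustration.

```python
# Sign Error Rate (SER) as edit distance over sign sequences,
# normalized by reference length. Glosses below are invented.

def edit_distance(hyp, ref):
    """Sign-level Levenshtein distance via dynamic programming."""
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(hyp)][len(ref)]

def sign_error_rate(hyp_signs, ref_signs):
    return edit_distance(hyp_signs, ref_signs) / len(ref_signs)

ref = ["BUS", "NUMBER", "TWO", "ARRIVE", "TEN", "MINUTE"]
hyp = ["BUS", "TWO", "ARRIVE", "TEN", "MINUTE"]   # one sign missing
print(sign_error_rate(hyp, ref))  # 1 error over 6 reference signs
```

An SER below 10%, as reported above, thus means fewer than one erroneous sign per ten reference signs on average.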


Cerebral organization during sentence processing in English and in American Sign Language (ASL) was characterized by employing functional magnetic resonance imaging (fMRI) at 4 T. Effects of deafness, age of language acquisition, and bilingualism were assessed by comparing results from (i) normally hearing, monolingual, native speakers of English, (ii) congenitally, genetically deaf, native signers of ASL who learned English late and through the visual modality, and (iii) normally hearing bilinguals who were native signers of ASL and speakers of English. All groups, hearing and deaf, processing their native language, English or ASL, displayed strong and repeated activation within classical language areas of the left hemisphere. Deaf subjects reading English did not display activation in these regions. These results suggest that the early acquisition of a natural language is important in the expression of the strong bias for these areas to mediate language, independently of the form of the language. In addition, native signers, hearing and deaf, displayed extensive activation of homologous areas within the right hemisphere, indicating that the specific processing requirements of the language also in part determine the organization of the language systems of the brain.