945 results for "Sign language phonology"


Relevance: 100.00%

Abstract:

Objective: There are currently no adult mental health outcome measures that have been translated into Australian Sign Language (Auslan). Without a valid and reliable Auslan outcome measure, empirical research into the efficacy of mental health interventions for sign language users is not feasible. To address this problem, the Outcome Rating Scale (ORS), a measure of general functioning, was translated into Auslan and recorded on digital video disc (DVD) for use in clinical settings. The purpose of the present study was therefore to examine the reliability, validity and acceptability of the Auslan version of the ORS (ORS-Auslan). Method: The ORS-Auslan was administered to 44 deaf people who use Auslan as their first language and identify as members of a deaf community (termed 'Deaf' people) on their first presentation to a mental health or counselling facility, and to 55 Deaf people in the general community. The community sample also completed an Auslan version of the Depression Anxiety Stress Scale-21 (DASS-21). Results: t-tests indicated significant differences between the mean scores of the clinical and community samples. Internal consistency was acceptable given the small number of items in the ORS-Auslan. Construct validity was established by significant correlations between total scores on the DASS-21-Auslan and the ORS-Auslan. The acceptability of the ORS-Auslan was evident in its completion rate of 93%, compared with 63% for the DASS-21-Auslan. Conclusions: This is the only Auslan outcome measure available that can be used across a wide variety of mental health and clinical settings. The ORS-Auslan provides mental health clinicians with a reliable, valid and brief measure of general functioning that significantly distinguishes between clinical and non-clinical presentations for members of the Deaf community.
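
As a concrete illustration of the reported analyses, here is a minimal sketch of how internal consistency (Cronbach's alpha) and the clinical-vs-community t-test might be computed. The item format (four items scored 0-10) and the generated scores are assumptions for illustration, not the study's data:

    import numpy as np
    from scipy import stats

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    # Hypothetical data: four item scores per respondent, 44 clinical
    # and 55 community respondents, mirroring the sample sizes above.
    rng = np.random.default_rng(0)
    clinical = rng.integers(0, 7, size=(44, 4)).astype(float)
    community = rng.integers(4, 11, size=(55, 4)).astype(float)

    print("alpha (community):", round(cronbach_alpha(community), 2))
    # Independent-samples t-test on total scores, as reported above.
    t, p = stats.ttest_ind(clinical.sum(axis=1), community.sum(axis=1))
    print(f"clinical vs community: t = {t:.2f}, p = {p:.4f}")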

Relevance: 100.00%

Abstract:

For sign languages used by deaf communities, linguistic corpora have until recently been unavailable, due to the lack of a writing system and a written culture in these communities, and the very recent advent of digital video. Recent improvements in video and computer technology have now made larger sign language datasets possible; however, large sign language datasets that are fully machine-readable remain elusive. This is due to two challenges: (1) inconsistencies that arise when signs are annotated by means of a spoken/written language, and (2) the fact that much of signed interaction is not composed of lexical signs (the equivalent of words) but of less conventionalised constructions. As sign language corpus building progresses, the potential for some standards in annotation is beginning to emerge; before this project, however, no attempt had been made to standardise these practices across corpora, a prerequisite for comparing data cross-linguistically. This project thus had the following aims: (1) to develop annotation standards for glosses (lexical/word level); (2) to test their reliability and validity; and (3) to improve current software tools that facilitate a reliable workflow. Overall, the project aimed not only to set a standard for the whole field of sign language studies throughout the world, but also to make significant advances toward two of the world's largest machine-readable datasets for sign languages - specifically the BSL Corpus (British Sign Language, http://bslcorpusproject.org) and the Corpus NGT (Sign Language of the Netherlands, http://www.ru.nl/corpusngt).
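
The pay-off of such gloss standards is that simple machine checks become possible. A minimal sketch of a consistency check that flags lemmas glossed in more than one way, assuming annotations exported to CSV with hypothetical 'lemma' and 'gloss' columns (not the project's actual tooling):

    import csv
    from collections import defaultdict

    def gloss_variants(path):
        """Map each lemma to the set of gloss spellings used for it.

        A corpus annotated to a consistent ID-gloss standard should use
        exactly one gloss per lemma; multiple spellings signal errors.
        """
        variants = defaultdict(set)
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                variants[row["lemma"]].add(row["gloss"])
        return variants

    # Hypothetical export file name.
    for lemma, spellings in gloss_variants("bsl_corpus_export.csv").items():
        if len(spellings) > 1:
            print(f"Inconsistent glossing for {lemma}: {sorted(spellings)}")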

Relevance: 100.00%

Abstract:

The aim of the present study was to investigate the functional role of syllables in sign language and how different phonological combinations influence sign production. The influence of age of acquisition was also evaluated. Deaf signers (native and non-native) of Catalan Sign Language (LSC) were asked, in a picture-sign interference task, to sign picture names while ignoring distractor signs with which they shared two of the three main phonological parameters (Location, Movement, and Handshape). The results revealed a different impact for each of the three phonological combinations. While no effect was observed for the combination Handshape-Location, the combination Handshape-Movement increased signing latencies, but only in the non-native group. A facilitatory effect was observed in both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Our results thus support the functional role of syllable units during phonological articulation in sign language production.
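
A minimal sketch of how such interference effects are typically read off latency data. The trial records and the unrelated-distractor baseline are illustrative assumptions, not the study's actual design or numbers:

    import statistics

    # Hypothetical (condition, signing latency in ms) trial records.
    trials = [
        ("Location-Movement", 812), ("Location-Movement", 798),
        ("Handshape-Movement", 905), ("Handshape-Movement", 921),
        ("Handshape-Location", 861), ("Handshape-Location", 858),
        ("unrelated", 864), ("unrelated", 870),
    ]

    by_condition = {}
    for condition, latency in trials:
        by_condition.setdefault(condition, []).append(latency)

    # Effects are read against the unrelated baseline: negative values
    # indicate facilitation, positive values inhibition.
    baseline = statistics.mean(by_condition.pop("unrelated"))
    for condition, latencies in by_condition.items():
        print(f"{condition}: {statistics.mean(latencies) - baseline:+.0f} ms")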

Relevance: 100.00%

Abstract:

An automated system for detecting head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the peaks and valleys in the motion signal. Each parameter is analyzed independently, because many relevant head movements in ASL involve major changes around a single rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In an experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists. Initial results are promising, as the system matches the linguists' labels in a significant number of cases.
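
A minimal sketch of this style of peak-and-valley analysis on a single rotation parameter; the amplitude and timing thresholds are illustrative assumptions, not the system's actual values:

    import numpy as np
    from scipy.signal import find_peaks

    def detect_headshake(yaw, fps, min_amplitude=3.0, min_extrema=4):
        """Flag a head shake in a per-frame yaw-rotation signal (degrees).

        Following the idea above, one rotation axis is analyzed
        independently, looking at the size and spacing of the signal's
        peaks and valleys. Thresholds here are illustrative assumptions.
        """
        peaks, _ = find_peaks(yaw, prominence=min_amplitude)
        valleys, _ = find_peaks(-yaw, prominence=min_amplitude)
        extrema = np.sort(np.concatenate([peaks, valleys]))
        if len(extrema) < min_extrema:
            return False
        # A shake oscillates quickly: successive extrema < ~0.5 s apart.
        return bool(np.all(np.diff(extrema) < 0.5 * fps))

    # A synthetic 2 Hz side-to-side oscillation at 30 fps reads as a shake.
    t = np.arange(0, 2, 1 / 30)
    print(detect_headshake(10 * np.sin(2 * np.pi * 2 * t), fps=30))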

Relevance: 100.00%

Abstract:

Locating hands in sign language video is challenging for a number of reasons. Hand appearance varies widely across signers due to anthropometric variation and varying levels of signer proficiency. Video can be captured under varying illumination, camera resolutions, and levels of scene clutter, e.g., high-resolution video captured in a studio vs. low-resolution video gathered by a webcam in a user's home. Moreover, signers' clothing varies: skin-toned vs. contrasting clothing, short-sleeved vs. long-sleeved shirts, and so on. In this work, the hand detection problem is addressed in an appearance-matching framework. The Histogram of Oriented Gradients (HOG) based matching score function is reformulated to allow non-rigid alignment between pairs of images, to account for hand shape variation. The resulting alignment score is used within a Support Vector Machine (SVM) hand/not-hand classifier. The new matching score function yields improved performance (in ROC area and hand detection rate) over the Vocabulary Guided Pyramid Match Kernel (VGPMK) and the traditional rigid HOG distance on American Sign Language video gestured by expert signers. The proposed match score function is computationally less expensive (for training and testing), has fewer parameters, and is less sensitive to parameter settings than VGPMK. The proposed detector works well on test sequences from an inexpert signer in a non-studio setting with a cluttered background.
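
A minimal sketch of the general idea: compare block-wise HOG descriptors while letting each block match a slightly displaced block in the other image. The grid size, neighbourhood, and descriptor settings are illustrative assumptions and only a crude stand-in for the paper's actual non-rigid alignment formulation:

    import numpy as np
    from skimage.feature import hog

    def block_hog(img, grid=4):
        """One HOG descriptor per spatial block of a grayscale image."""
        h, w = img.shape
        return np.array([
            hog(img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid],
                pixels_per_cell=(8, 8), cells_per_block=(1, 1))
            for i in range(grid) for j in range(grid)])

    def alignment_score(a, b, grid=4):
        """Sum of per-block distances, letting each block of `a` match its
        counterpart in `b` or an immediate neighbour (non-rigid slack)."""
        score = 0.0
        for i in range(grid):
            for j in range(grid):
                q = a[i * grid + j]
                neighbours = [b[u * grid + v]
                              for u in range(max(0, i-1), min(grid, i+2))
                              for v in range(max(0, j-1), min(grid, j+2))]
                score += min(np.linalg.norm(q - n) for n in neighbours)
        return score

    # The score would then feed an SVM hand/not-hand classifier, e.g. by
    # representing a window via its scores to exemplar hand images.
    rng = np.random.default_rng(0)
    a, b = rng.random((64, 64)), rng.random((64, 64))
    print(alignment_score(block_hog(a), block_hog(b)))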

Relevance: 100.00%

Abstract:

Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Identification of these facial gestures is therefore essential to sign language recognition. One problem in detecting such grammatical indicators is occlusion: if the signer's hand blocks the eyebrows during production of a sign, it becomes difficult to track them. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by several Deaf native signers, and it detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes.
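
A minimal sketch of one tracking step with occlusion fallback, assuming OpenCV template matching; the score threshold and the anthropometric prior position are stand-ins for the paper's actual model:

    import cv2

    def track_feature(frame_gray, template, prior_xy, score_floor=0.6):
        """One tracking step for a facial-feature template.

        Matches the template by normalized cross-correlation; if the best
        score drops below `score_floor` (e.g. a hand occluding the brows),
        falls back to `prior_xy`, the location predicted by an
        anthropometric face model, so tracking can recover promptly.
        """
        result = cv2.matchTemplate(frame_gray, template,
                                   cv2.TM_CCOEFF_NORMED)
        _, score, _, location = cv2.minMaxLoc(result)
        if score < score_floor:
            return prior_xy, True   # occluded: use the model's prediction
        return location, False      # tracked normally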

Relevance: 100.00%

Abstract:

This article analyses the preferences of deaf sign language users regarding the sign language interpretation service on television, as well as how TV channels in Spain provide that service. The objective is to establish whether the way this accessibility service is provided matches users' preferences or differs from them. The analysis presents the opinions of deaf people who use Spanish Sign Language as their first language of communication. A study was also conducted of the programmes broadcast with sign language during the week of 10-16 March 2014. The main findings reveal that deaf viewers are dissatisfied with broadcasting times. They ask for news programmes with sign language, they would rather have the interpretation carried out by deaf people who use sign language, and they prefer the interpreter to be the main image on screen. Concerning the programmes analysed, the study shows that the majority of programmes with sign language are broadcast at night, they are entertainment programmes, the interpretation is carried out by hearing people who use sign language, and the interpreter's image is displayed in a corner of the screen.

Relevance: 100.00%

Abstract:

Deaf people are perceived by hearing people as living in a silent world. Yet silence cannot exist without sound, so if sound is not heard, can there be silence? From a linguistic point of view, silence is the absence of, or an intermission in, communication. Silence can be communicative or non-communicative. Thus, silence must exist in sign languages as well. Sign languages are based on visual perception and on production through movement, so silence must be visually perceptible; and if there is such a thing as visual silence, what does it look like? This paper analyses the topic of silence from a Deaf perspective. The main aspects explored are the perception and evaluation of acoustic noise and silence by Deaf people; the conceptualisation of silence in visual languages, such as sign languages; the qualities of visual silence; the meaning of silence as the absence of communication (particularly between hearing and Deaf people); social rules for silence; and silencing strategies.

Relevance: 100.00%

Abstract:

Sign language animations can lead to better accessibility of information and services for people who are deaf and have low literacy skills in spoken/written languages. Because the word order, syntax, and lexicon of a sign language differ from those of the spoken/written language, many deaf people find it difficult to comprehend text on a computer screen or captions on television. Animated characters performing sign language in a comprehensible way could make this information accessible. Facial expressions and other non-manual components play an important role in the naturalness and understandability of these animations, and their coordination with the manual signs is crucial for the interpretation of the signed message. Software that advances the support of facial expressions in the generation of sign language animation could make this technology more acceptable to deaf people. In this survey, we discuss the challenges in facial expression synthesis and we compare and critique the state-of-the-art projects on generating facial expressions in sign language animations. Beginning with an overview of facial expression linguistics, sign language animation technologies, and some background on animating facial expressions, we then discuss the search strategy and criteria used to select the five projects that are the primary focus of this survey. The survey goes on to introduce the work of the five projects under consideration. Their contributions are compared in terms of support for a specific sign language, the categories of facial expressions investigated, the focus range in animation generation, the use of annotated corpora, and the input data or hypotheses behind each approach, among other factors. Strengths and drawbacks of individual projects are identified along these dimensions. The survey concludes with our current research focus in this area and future prospects.

Relevance: 100.00%

Abstract:

The purpose of this study was to verify discriminative control by segments of signs in deaf adolescents who use Brazilian Sign Language (Libras). Four adolescents with bilateral deafness, who had received three years of Libras instruction, watched a video presenting a children's tale in Libras. After showing accurate understanding of the story, the participants watched another video of the same story in which 12 signs were altered in one of their segments (hand configuration, place of articulation, or movement). They apparently did not detect the alterations. However, when the signs were presented in isolation in a matching-to-sample test, they virtually always selected the picture corresponding to the unaltered sign. Three participants selected an unfamiliar picture in 50% or more of the trials with an altered sign as the sample, showing that they could detect the majority of the altered signs.

Relevance: 100.00%

Abstract:

Television is a powerful communication medium, reaching all social classes and available in 98% of Brazilian homes. It has been used to distribute educational materials since the 1950s. By 2016, Open Digital TV (ODTV) in Brazil is expected to cover the entire national territory, replacing analog TV signals. Worldwide, concerns about accessibility for People with Special Needs (PSN) in that medium have intensified since the 1990s. In Brazil, there are an estimated 24.6 million PSN, 23% of whom have some type of hearing loss; of these, 2.9% are reported as deaf. Brazilian Sign Language (LIBRAS) is considered the first literacy language for deaf people in Brazil. In this context, this paper presents a proposal to facilitate the generation of educational content for ODTV, based on two components. The first, called SynchrLIBRAS, synchronises Portuguese subtitles and a LIBRAS interpreter window for videos downloaded from the Web. The second component allows this content to be viewed through the Brazilian Digital TV System and IPTV, environments that implement the Ginga-NCL middleware. The main focus of this paper is the first component, SynchrLIBRAS. The proposal has educational purposes, helping to teach LIBRAS to people who may collaborate in the social inclusion of deaf people.
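
A minimal sketch of the kind of subtitle-to-clip scheduling such a tool must perform, assuming SRT-style timestamps and a hypothetical mapping from each cue to a LIBRAS clip; the real SynchrLIBRAS workflow and its Ginga-NCL output are not reproduced here:

    import re

    SRT_TIME = re.compile(r"(\d+):(\d+):(\d+)[,.](\d+)")

    def to_seconds(stamp):
        """Convert an SRT-style timestamp like '00:00:01,000' to seconds."""
        h, m, s, ms = map(int, SRT_TIME.match(stamp).groups())
        return h * 3600 + m * 60 + s + ms / 1000

    def schedule(cues):
        """Turn (start, end, libras_clip) cues into a playback schedule,
        which an NCL document could express as media begin/end anchors."""
        return [(to_seconds(start), to_seconds(end), clip)
                for start, end, clip in cues]

    # Hypothetical cue pairing a Portuguese subtitle with a LIBRAS clip.
    print(schedule([("00:00:01,000", "00:00:04,500", "libras_0001.mp4")]))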

Relevance: 100.00%

Abstract:

This paper presents the design of a multimedia tool that translates into Spanish Sign Language the announcement messages that a public address system can deliver. The objective of this work is to provide a tool that improves the social inclusion of people with hearing impairments. To this end, the setting and the audio messages typical of an airport were selected for this pilot project. Finally, the audio messages were translated into Spanish Sign Language by synthesising an avatar, using the rotoscoping animation technique applied to video recordings of an interpreter. The final results were evaluated by deaf people.