284 results for Gestural Phonology
Abstract:
Phonological development in hearing children of deaf parents. Dr. Diane Lillo-Martin. 5/9/2010. The researcher examines the effects of a unique linguistic environment on phonological development. The research asks whether 3 hearing children of deaf parents (hereafter CODAs) show inconsistencies, compared with children in a typical linguistic environment, in their syllable structure, phonological processes, or phonemic inventories. More specifically, it asks whether their speech is more consistent with that of children from typical environments or more similar to that of children with phonological delays, phonological disorders, or articulation disorders. After examination of these three components of a child's phonological development, it can be concluded that the linguistic environment of CODA children does not hinder their phonological language development.
Abstract:
This applied research continued the line of work sustained since the 05/07 period and sought to deepen the orientation of theatre pedagogy toward interaction with linguistic learning, chiefly oral discourse. Work was carried out with a small sample group, testing a pedagogical-didactic sequence intended to yield results supporting the claim that Theatre as a language makes possible, in school, the guided investigation of "the ways in which the outside intervenes in shaping the inside," and helps students develop their orality and gestuality at the interactional level, with a positive impact on their academic and social performance.
Abstract:
The aim of this project is the development of a MIDI interface based on digital image processing techniques, capable of controlling several parameters of an audio application using gestural information: the movement of the hands. The image is captured by a commercial Kinect camera and the data it produces are processed in real time. The purpose is to convert the position of several control points on the body into MIDI musical control information. The interface has been developed in the Processing programming language and environment, which is based on Java, freely available, and easy to use. The audio software selected is Ableton Live, version 8.2.2, chosen because it is useful both for music composition and for live performance, the latter being the interface's main intended use. The development of the project is divided into two main blocks: first, the graphic design of the controller, and second, the management of the musical information. The first section justifies the design of the controller, which consists of virtual buttons: it explains how the controller works and, briefly, the function of each button; this last topic is covered in detail in Annex II: User Manual. The second section explains the path the MIDI information takes from the gestural processor to the musical synthesizer. The path begins in Processing, from which messages are sent that are later interpreted by the selected sequencer, Ableton Live. After the detailed explanation of the development of the project, the author's conclusions are presented, including the pros and cons to take into account in order to get the most out of the controller, and possible future lines of development are set out. A budget is also provided, broken down into material and personnel costs.
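The core conversion this abstract describes, a tracked hand position becoming a MIDI Control Change message that a sequencer such as Ableton Live can receive, can be sketched as follows. This is an illustrative reconstruction, not the project's actual Processing code; the function name and its parameters are assumptions.

```python
def position_to_cc(y_normalized, channel=0, controller=1):
    """Map a normalized vertical hand position (0.0-1.0) to a
    3-byte MIDI Control Change message.

    Hypothetical sketch: the real project sends MIDI from Processing;
    names and defaults here are illustrative only.
    """
    if not 0.0 <= y_normalized <= 1.0:
        raise ValueError("position must be normalized to [0, 1]")
    value = round(y_normalized * 127)      # MIDI data bytes are 7-bit
    status = 0xB0 | (channel & 0x0F)       # 0xB0 = Control Change, low nibble = channel
    return bytes([status, controller & 0x7F, value & 0x7F])

# A hand at mid-height, sent as CC 1 (mod wheel) on channel 1:
msg = position_to_cc(0.5)
```

In this sketch the Kinect skeleton tracking is assumed to have already reduced a hand to a normalized coordinate; the controller's virtual buttons would each map a different body point to a different CC number.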
Abstract:
This Thesis presents two related lines of research contributing to the general fields of Human-Technology (or Machine) Interaction (HTI, or HMI), computational linguistics, and user experience evaluation. These two lines are the design and the user-focused evaluation of advanced Human-Machine Interaction systems. The first part of the Thesis (Chapters 2 to 4) is centred on advanced HMI system design. Chapter 2 provides a background overview of the state of research in multimodal conversational systems; this sets the stage for the research work presented in the rest of the Thesis. Chapters 3 and 4 focus in detail on two major aspects of HMI design: a generalised dialogue manager for context-aware multimodal HMI, and embodied conversational agents (ECAs, or animated agents) to improve dialogue robustness, respectively. Chapter 3, on dialogue management, deals with how to handle heterogeneous information, whether from the communication modalities or from external sensors. A highly abstracted architectural contribution based on State Chart XML is proposed. Chapter 4 presents a contribution for the internal representation of communication intentions and their translation into gestural sequences for an ECA, designed especially to improve robustness in critical dialogue situations such as when miscommunication occurs. We propose an extension of the functionality of Functional Mark-up Language, as envisaged in much of the work in the SAIBA framework.
Our extension allows the representation of communication acts that carry intentions that are not for the interlocutor to know of, but which are made to influence him or her as well as the flow of the dialogue itself. This is achieved through a design element we have called the Communication Intention Base. Such representation of "non-declared" intentions allows the construction of communication acts that carry several communication intentions simultaneously. Also in Chapter 4, an experimental system is described which allows (simulated) remote control of a home automation assistant, with biometric (speaker) authentication to grant access, featuring embodied conversational agents for each of the tasks. The discussion includes a description of the behavioural sequences for the ECAs, which were designed for specific dialogue situations with particular attention given to the objective of improving dialogue robustness. Chapters 5 to 7 form the evaluation part of the Thesis. Chapter 5 reviews evaluation approaches in the literature for information technologies, and in particular for speech-based interaction systems, that are useful precedents to the contributions of the present Thesis. The main evaluation precedents on which the work in this Thesis has built are the Technology Acceptance Model (TAM), the Subjective Assessment of Speech System Interfaces (SASSI) tool, and ITU-T Recommendation P.851. Chapter 6 presents the author's work in establishing an evaluation framework and methodology applied to the users' experience with multimodal HMI systems. A novel user-acceptance Subjective Quality Evaluation Framework was developed by the author specifically for this purpose. A class structure arises from two orthogonal sets of dimensions.
First we identify three broad classes of parameters related with user acceptance: likeability factors (those that have to do with the experience of using the system), rejection factors (which can only have a negative valence) and perception of usefulness. Secondly, the class structure is further broken down into several "user perception levels"; at the very least: an overall system-assessment level, task and goal-related levels, and an interface level (e.g., a dialogue system with or without an ECA). An empirical evaluation of the system described in Chapter 4 is presented in Chapter 7. The study was based on the abovementioned precedents in the literature, expanded with categories covering the inclusion of an ECA, the users' self-assessed emotions, and particular rejection factors (privacy and security concerns). The Subjective Quality Evaluation Framework proposed in the previous chapter was also scrutinised. Factor analyses revealed an item structure very much related conceptually to the usefulness-likeability-rejection class division introduced above, thus giving it some empirical weight. Regression-based analysis revealed structures of dependencies, paths of interrelations, between the subjective and objective parameters considered. The central mediation effect, in the Technology Acceptance Model, of perceived usefulness on the dependency relationship of intention-to-use with perceived ease of use was confirmed in this study. Furthermore, the pattern of relationships was stronger for variables covering more broadly the likeability and usefulness categories in the Subjective Quality Evaluation Framework. Rejection factors were found to have a distinct presence as components in factor analyses, as well as distinct behaviour: they were found to moderate the relationship between intention-to-use (the main measure of user acceptance) and its strongest predictor, perceived usefulness.
Insights of secondary importance are also given regarding the effect of ECAs on the interface of spoken dialogue systems and the dimensions of user perception and judgement attitude that may have a role in determining user acceptance of the technology. Despite observing slightly better performance values in the case of the system with the ECA, subjective opinions regarding both systems were, overall, very similar. Minor differences between two experimental groups (one interacting with an ECA, the other only through speech) include a more direct effect of dialogue problems (e.g., non-understandings) on perceived dialogue robustness for the voice-only interface test group, and a more positive emotional response for the ECA test group. Our findings further suggest that the ECA generates higher initial expectations, and users seem slightly more confident in their interaction with the ECA than do those without it. Finally, mild evidence of social effects of ECAs was also found: the perceived friendliness of the ECA increased security concerns, and ECA users may tend to blame themselves rather than the system when dialogue problems are encountered, while the opposite may be true for voice-only users.
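The generalised, State Chart XML-style dialogue management this thesis abstract describes can be illustrated with a minimal event-driven sketch: heterogeneous inputs (speech, gesture, external sensors) are normalized into named events that drive state transitions. All state and event names below are invented for illustration and are not the thesis's actual architecture.

```python
# Minimal state-chart dialogue manager, in the spirit of SCXML:
# a transition table keyed by (current state, event).
# State and event names are hypothetical.
TRANSITIONS = {
    ("idle", "user_detected"): "greeting",
    ("greeting", "speech_command"): "executing_task",
    ("greeting", "gesture_command"): "executing_task",
    ("executing_task", "task_done"): "idle",
    ("executing_task", "misunderstanding"): "repair",
    ("repair", "clarified"): "executing_task",
}

class DialogueManager:
    def __init__(self, initial="idle"):
        self.state = initial

    def handle(self, event):
        """Advance the state chart; unknown events leave the state unchanged."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

dm = DialogueManager()
dm.handle("user_detected")      # greeting
dm.handle("gesture_command")    # executing_task
dm.handle("misunderstanding")   # repair: where an ECA repair strategy would run
```

The design point the abstract makes is that a modality-agnostic event vocabulary lets one manager serve speech, gesture, and sensor input alike; here both `speech_command` and `gesture_command` drive the same transition.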
Abstract:
Three studies investigated the relation between symbolic gestures and words, aiming to discover the neural basis and behavioural features of the lexical-semantic processing and integration of the two communicative signals. The first study aimed to determine whether elaboration of communicative signals (symbolic gestures and words) is always accompanied by integration of one with the other and whether, if present, this integration can be taken as support for the existence of a single control mechanism. Experiment 1 aimed to determine whether and how gesture is integrated with word. Participants were administered a semantic priming paradigm with a lexical decision task and pronounced a target word, which was preceded by a meaningful or meaningless prime gesture. When meaningful, the gesture could be either congruent or incongruent with the word's meaning. The duration of prime presentation (100, 250, or 400 ms) varied randomly. Voice spectra, lip kinematics, and time to response were recorded and analyzed. Formant 1 of the voice spectrum and mean velocity of lip movement increased when the prime was meaningful and congruent with the word, compared with a meaningless gesture. In other words, parameters of voice and movement were magnified by congruence, but only when prime duration was 250 ms. Time to response to a meaningful gesture was shorter in the congruent than in the incongruent condition. Experiment 2 aimed to determine whether the mechanism of integration of a prime word with a target word is similar to that of a prime gesture with a target word. Formant 1 of the target word increased when the prime word was meaningful and congruent, compared with a meaningless congruent prime. The increase, however, was present for every prime word duration. In the second study, experiment 3 aimed to determine whether comprehension of a symbolic prime gesture makes use of motor simulation.
Transcranial Magnetic Stimulation was delivered to the left primary motor cortex 100, 250, or 500 ms after prime gesture presentation. The Motor Evoked Potential of the First Dorsal Interosseus increased when stimulation occurred 100 ms post-stimulus. Thus, the gesture was understood within 100 ms and integrated with the target word within 250 ms. Experiment 4 excluded any involvement of hand motor simulation in the comprehension of a prime word. The effect of the prior presentation of a symbolic gesture on the processing of a congruent target word was investigated in study 3. In experiment 5, symbolic gestures were presented as primes, followed by semantically congruent target words or pseudowords. In this case, the lexical-semantic decision was accompanied by motor simulation 100 ms after the onset of the verbal stimuli. Summing up, the same type of integration with a word was present for both prime gesture and prime word. It was probably subsequent to comprehension of the signal, which used motor simulation for gestures and direct access to semantics for words. However, gestures and words could be understood at the same motor level through simulation if words were preceded by an adequate gestural context. Results are discussed in the perspective of a continuum between transitive actions and emblems, in parallel with language; the grounded/symbolic content of the different signals evidences a relation between the sensorimotor and linguistic systems, which could interact at different levels.
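The priming design described above (prime type crossed with randomized prime duration) can be sketched as a trial-list generator. This is an illustrative reconstruction under assumed parameters, not the authors' actual materials; the repetition count is invented.

```python
import itertools
import random

# Hypothetical reconstruction of the Experiment 1 design: each trial
# pairs a prime-gesture condition (congruent, incongruent, meaningless)
# with one of the three prime durations, in randomized order.
PRIME_TYPES = ["congruent", "incongruent", "meaningless"]
DURATIONS_MS = [100, 250, 400]

def make_trials(repetitions=10, seed=0):
    """Build a balanced, shuffled trial list: every prime-type/duration
    cell appears `repetitions` times."""
    rng = random.Random(seed)
    trials = [
        {"prime": prime, "duration_ms": dur}
        for prime, dur in itertools.product(PRIME_TYPES, DURATIONS_MS)
        for _ in range(repetitions)
    ]
    rng.shuffle(trials)  # randomize presentation order
    return trials

trials = make_trials()  # 3 prime types x 3 durations x 10 reps = 90 trials
```

Balancing the cells while shuffling the sequence is what lets the reported effect (magnification only at the 250 ms duration) be separated from order effects.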
Abstract:
Introduction: Children with specific language impairment (SLI) are prone to difficulty in the process of literacy acquisition owing to their multiple language impairments. This study compared and characterized the performance of children with SLI and children with typical language development on tasks of alliteration, rhyme, phonological short-term memory, and dictation of words and pseudowords. The study's main hypothesis was that the SLI group would perform worse than the typically developing group on all the abilities studied. Method: 12 children with SLI (study group, GP) and 48 with typical development (control group, GC), aged between 7 years and 9 years 11 months, participated. All subjects attended the 2nd or 3rd year of elementary school and had preserved hearing and non-verbal intellectual performance. Measures of receptive vocabulary, phonology, and socioeconomic status were used to select the groups. The experimental measures were standardized tests of alliteration, rhyme, and phonological short-term memory, together with a word and pseudoword dictation developed for this research. Results: both groups performed worse on rhyme than on alliteration tasks, and the GP performed worse than the GC on both. Analysis of the distractors in the alliteration and rhyme tasks showed that in alliteration the GP made more errors of a semantic type, whereas in the rhyme test more errors were of a phonological type. The GP performed worse than the GC on phonological short-term memory and on word and pseudoword dictation. The GP showed greater difficulty with pseudoword than with word dictation, while the GC showed no significant difference between the two dictations.
In word dictation, the GP made more whole-word errors, whereas in pseudoword dictation errors occurred more in the whole word and in the final syllable. Comparing group performance by school year, the GC subjects in the 2nd and 3rd years showed no significant difference on the tasks, whereas the GP subjects in the 3rd year outperformed those in the 2nd year on all experimental measures except phonological short-term memory. Conclusions: the GP had difficulty with phonological processing and writing tasks that the GC performed with relative ease. The subjects with SLI showed a more global analysis of the stimuli presented in the phonological awareness tasks, which led them to neglect important segmental aspects. Difficulty in approaching information analytically, combined with their linguistic and phonological processing impairments, led the GP to a higher error rate on the dictation tasks. Despite these impairments, the GP subjects in the 3rd year performed better than those in the 2nd year on all abilities except phonological short-term memory, which is their clinical marker. These data reinforce the need for early diagnosis and intervention in this population, whose therapeutic process should include the abilities addressed in this study.
Abstract:
In 1559, Pieter Bruegel the Elder's depiction of "Netherlandish Proverbs" illustrated his profound understanding of the Dutch love for proverbs, their contemporary values, and appreciation for moral lessons in art forms. Depicting gestures and poses that represented proverbial phrases enabled Bruegel's leap from the didactic labels employed by other artists to the inscription-free success of "Netherlandish Proverbs." My examination reveals that Bruegel's employment of gestural imagery indicating rhetorical phrases or proverbs was reinforced by a history of scholarly curatorship of written proverb collections, humanist interest in proverbs, and use of the Dutch vernacular to bolster protonational pride.
Abstract:
BACKGROUND/AIM: Gesturing plays an important role in social behavior and social learning. Deficits are frequent in schizophrenia and may contribute to impaired social functioning. Information is lacking about deficits over the course of the disease and about the severity and patterns of impairment in first-episode patients. Hence, we aimed to investigate gesturing in first-episode compared with multiple-episode schizophrenia patients and healthy controls. METHODS: In 14 first-episode patients, 14 multiple-episode patients, and 16 healthy controls matched for age, gender, and education, gesturing was assessed with the comprehensive Test of Upper Limb Apraxia. Performance in two domains of gesturing, imitation and pantomime, was recorded on video. Raters of gesture performance were blinded. RESULTS: Patients with multiple episodes had severe gestural deficits. For almost all gesture categories, performance was worse in multiple- than in first-episode patients. First-episode patients demonstrated subtle deficits with a comparable pattern. CONCLUSIONS: Subjects with multiple psychotic episodes have severe deficits in gesturing, while only mild impairments were found in first-episode patients, independent of age, gender, education, and negative symptoms. The results indicate that gesturing is impaired at the onset of the disease and is likely to deteriorate further during its course.
Abstract:
A collection of miscellaneous pamphlets on the romance languages.
Abstract:
[1st series]. The Veda, the Avesta, the science of language -- 2nd series. The east and west, religion and mythology, orthography and phonology, Hindu astronomy.
Abstract:
Yazghulami is a South-East Iranian language spoken in the Pamir area of Tajikistan by about 9,000 people. This study gives an account of the phonology of the language by describing contrastive segments and their distribution and realizations, as well as suprasegmental features such as syllable structure and stress patterns. Field research was carried out in a community of Yazghulami speakers in Dushanbe, the capital of Tajikistan, by recording, transcribing, and annotating spoken language. Yazghulami is analyzed as having 8 vowel phonemes, of which one pair contrasts in length, and 36 consonant phonemes with a considerable array of palatal, velar, and uvular phonemes, among them a set of three labialized plosives and three labialized fricatives. The syllable structure of Yazghulami allows clusters of no more than two consonants in the onset and two in the coda; clusters in both positions do not occur in one and the same syllable. Stress generally falls on the last syllable of a word, although when nouns are inflected with suffixes, stress falls instead on the last syllable of the stem. With these results, a foundation is laid for further efforts to develop and raise the status of this endangered language.
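The phonotactic generalization reported above (at most two consonants in onset and coda, and never a cluster in both positions of the same syllable) can be expressed as a small checker. The representation below is an illustrative simplification, not the study's analysis.

```python
def is_licit_syllable(onset, nucleus, coda):
    """Check a syllable against the reported Yazghulami constraints:
    at most two consonants in the onset, at most two in the coda,
    and no syllable with a cluster in both onset and coda.

    Onset and coda are tuples of consonant symbols and the nucleus a
    vowel string; this representation is a hypothetical simplification.
    """
    if not nucleus:
        return False          # every syllable needs a vocalic nucleus
    if len(onset) > 2 or len(coda) > 2:
        return False          # clusters are capped at two consonants
    if len(onset) >= 2 and len(coda) >= 2:
        return False          # clusters cannot co-occur in one syllable
    return True

is_licit_syllable(("x", "t"), "a", ("r",))       # cluster in onset only: licit
is_licit_syllable(("x", "t"), "a", ("r", "s"))   # clusters in both positions: illicit
```

Encoding such constraints as executable checks is one way fieldwork generalizations of this kind can be validated against an annotated corpus.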
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06