967 results for Natural language techniques, Semantic spaces, Random projection, Documents
Abstract:
This paper sets out to present the transformation of Translation Studies from the perspective of Gender Studies, two fields of knowledge that share a multi- and interdisciplinary character. Regarding the former, the cultural turn of the 1980s marks the moment when translation came to be included among the cultural subsystems with competing interests that are subject to the prevailing ideologies (Molina Martínez, 2006: 37). In parallel, a field of study took shape in Canada linking the transcultural and translinguistic developments arising from the feminist movements of the 1970s with the production and reception of texts, topics covered by research on gender and translation. In this context emerged the notion of translation in the feminine, or rewriting in the feminine, whose aim is to subvert patriarchal language and assert feminist ideas (Lotbinière-Harwood, 1991). The discursive and textual strategies used to solve gender-related translation problems (supplementation or compensation, metatextuality, hijacking, and the specular pact) typically resort to language marked by semantic alterations, neologisms, or linguistic innovations whose purpose is to question the current language and, at the same time, make the feminine presence visible (Castro Vázquez, 2008: 296-298). In this paper we discuss and exemplify the strategies listed above.
Abstract:
This text retraces the history of the agrarian conflicts in Misiones and of the unions of family farmers, known as colonos, with the aim of tracing processes of continuity with the past. The main objective is to present a category through which a process of production and reproduction of a social, political, and cultural representation can be understood. Within an ethnographic approach, the methodology was based on the triangulation of data obtained through different techniques (open interviews, semi-structured interviews, participant observation, review of documentary and press archives) and on a review of the literature on the subject.
Abstract:
This article critically explores the thesis that contemporary formal deductive logic provides methods and tools for a theory of the evaluation of arguments formulated in a natural language. The article argues that the theory of (in)validity of formal deductive logic can be applied to natural-language arguments only by relying on the very thing it is supposed to explain theoretically, i.e., the intuitions that speakers of a natural language have about the relations of logical implication between the expressions of that language. Some pedagogical consequences of this critique are also explored.
Abstract:
This thesis investigates, within the field of Information Technology, the various developments in the automatic interpretation of the semantics of texts and their relation to Information Retrieval Systems. Starting from a selective literature review, it seeks to systematize the documentation by laying out the main antecedents and techniques in their historical evolution, synthesizing the fundamental concepts, and highlighting the aspects that justify choosing one procedure over another when solving these problems.
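The header of this result list names random projection over semantic spaces, one of the techniques surveyed in work like the thesis above. As a purely illustrative aside (not drawn from that thesis), the following Python sketch shows how a term-document count matrix can be compressed into a low-dimensional semantic space with a sparse random projection; all names, dimensions, and the toy data are assumptions.

import numpy as np

def random_projection(doc_term, k=100, seed=0):
    """Project a (documents x terms) matrix into k random dimensions.

    Uses a sparse sign matrix (Achlioptas-style); by the
    Johnson-Lindenstrauss lemma, pairwise distances between documents
    are approximately preserved.
    """
    rng = np.random.default_rng(seed)
    n_terms = doc_term.shape[1]
    # Entries -1, 0, +1 drawn with probabilities 1/6, 2/3, 1/6.
    proj = rng.choice([-1.0, 0.0, 1.0], size=(n_terms, k), p=[1/6, 2/3, 1/6])
    return (doc_term @ proj) * np.sqrt(3.0 / k)

# Toy usage: 4 documents over a 1,000-term vocabulary reduced to 100 dimensions.
docs = np.random.default_rng(1).poisson(0.05, size=(4, 1000)).astype(float)
print(random_projection(docs).shape)  # (4, 100)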
Abstract:
This article presents Descriptive Analysis as a strategy for handling information during the research process and discusses its possible use in studies with a qualitative design. Much research in the Social and Human Sciences overlooks the importance of making explicit the theoretical and methodological underpinnings of the explanatory inferences or interpretations that are reached, that is, how one moves from the selected referent (unit of reference) to the argument (explanatory or interpretive model) intended to represent it. As a result, the problem of representing the referent as a tractable datum, and the necessary transformation of natural language (NL) into a descriptive language (DL), is usually ignored. Two examples from the fields of Ethology and Psychology are developed by applying the methodological strategy of Descriptive Analysis. They show that the coding this method enables takes into account, on the one hand, the body of knowledge and information specific to a particular disciplinary domain and, on the other, makes explicit the inferences followed in the reasoning and the interpretation rules used to arrive at new knowledge.
Abstract:
This paper describes the development of an Advanced Speech Communication System for Deaf People and its field evaluation in a real application domain: the renewal of Driver’s License. The system is composed of two modules. The first one is a Spanish into Spanish Sign Language (LSE: Lengua de Signos Española) translation module made up of a speech recognizer, a natural language translator (for converting a word sequence into a sequence of signs), and a 3D avatar animation module (for playing back the signs). The second module is a Spoken Spanish generator from sign-writing composed of a visual interface (for specifying a sequence of signs), a language translator (for generating the sequence of words in Spanish), and finally, a text to speech converter. For language translation, the system integrates three technologies: an example-based strategy, a rule-based translation method and a statistical translator. This paper also includes a detailed description of the evaluation carried out in the Local Traffic Office in the city of Toledo (Spain) involving real government employees and deaf people. This evaluation includes objective measurements from the system and subjective information from questionnaires. Finally, the paper reports an analysis of the main problems and a discussion about possible solutions.
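The system above integrates example-based, rule-based, and statistical translation, but the abstract does not state how the three are combined. The sketch below merely illustrates one plausible arrangement, a cascade that falls back from memorized examples to rules and finally to a statistical model; the function names, the fallback order, and the toy data are assumptions, not the authors' design.

from typing import Callable, Optional

def cascade_translate(utterance: str,
                      example_lookup: Callable[[str], Optional[str]],
                      rule_translate: Callable[[str], Optional[str]],
                      statistical_translate: Callable[[str], str]) -> str:
    """Translate a recognized Spanish utterance into a sign sequence.

    Tries an exact example match first, then hand-written rules, and
    finally backs off to a statistical model that always returns a guess.
    """
    for strategy in (example_lookup, rule_translate):
        result = strategy(utterance)
        if result is not None:
            return result
    return statistical_translate(utterance)

# Toy usage with stand-in strategies.
examples = {"buenos dias": "HELLO GOOD-DAY"}
print(cascade_translate("buenos dias",
                        examples.get,
                        lambda u: "DNI GIVE" if "dni" in u else None,
                        lambda u: " ".join(w.upper() for w in u.split())))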
Abstract:
This article presents a multi-agent expert system (SMAF) that allows the input of incidents occurring in different elements of the telecommunications area. SMAF interacts with expert and general users, and each agent with the whole community of agents, recording the incidents and their solutions in a knowledge base without analyzing their causes. Incidents are expressed using keywords taken from natural language (originally Spanish), and their main concepts are recorded together with their severities as the users express them. The system then searches for the best solution to each incident, aided by a human operator, using a notion of distance between incidents.
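The abstract states that incidents are recorded as keyword sets and that the best stored solution is retrieved using some distance between them, but the metric itself is not given. The sketch below uses Jaccard distance purely as an illustration of that retrieval step; all identifiers and the toy knowledge base are assumptions.

def jaccard_distance(a: set, b: set) -> float:
    """Distance between two keyword sets: 0.0 identical, 1.0 disjoint."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def best_solution(incident_keywords: set, knowledge_base: dict) -> str:
    """Return the stored solution whose incident keywords are closest."""
    return min(knowledge_base.items(),
               key=lambda item: jaccard_distance(incident_keywords, item[0]))[1]

# Toy knowledge base: frozenset of incident keywords -> recorded solution.
kb = {frozenset({"router", "link", "down"}): "Restart the trunk link",
      frozenset({"dns", "slow"}): "Flush the DNS cache"}
print(best_solution({"router", "link"}, kb))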
Abstract:
This article describes a knowledge-based method for generating multimedia descriptions that summarize the behavior of dynamic systems. We designed this method for users who monitor the behavior of a dynamic system with the help of sensor networks and make decisions according to prefixed management goals. Our method generates presentations using different modes such as text in natural language, 2D graphics and 3D animations. The method uses a qualitative representation of the dynamic system based on hierarchies of components and causal influences. The method includes an abstraction generator that uses the system representation to find and aggregate relevant data at an appropriate level of abstraction. In addition, the method includes a hierarchical planner that generates a presentation using a model with discourse patterns. Our method provides an efficient and flexible solution to generate concise and adapted multimedia presentations that summarize thousands of time series. It is general enough to be adapted to different dynamic systems with acceptable knowledge acquisition effort by reusing and adapting intuitive representations. We validated our method and evaluated its practical utility by developing several models for an application that ran in continuous real-time operation for more than one year, summarizing sensor data of a national hydrologic information system in Spain.
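The method above abstracts thousands of sensor time series into qualitative statements before planning a presentation. As a minimal sketch of that idea (not the authors' abstraction generator), the code below reduces one sensor series to a trend label and a one-sentence natural-language summary; the thresholds, names, and wording are assumptions.

import statistics

def qualitative_trend(values, tolerance=0.05):
    """Label a numeric series as 'rising', 'falling' or 'stable'."""
    first, last = statistics.mean(values[:3]), statistics.mean(values[-3:])
    change = (last - first) / (abs(first) + 1e-9)
    if change > tolerance:
        return "rising"
    if change < -tolerance:
        return "falling"
    return "stable"

def summarize(sensor_name, values, unit):
    trend = qualitative_trend(values)
    return (f"{sensor_name} is {trend}: currently {values[-1]:.1f} {unit}, "
            f"peak {max(values):.1f} {unit} over the reporting period.")

# Toy usage on an hourly river-level series.
levels = [2.1, 2.2, 2.4, 2.7, 3.1, 3.6]
print(summarize("River level at gauge E-042", levels, "m"))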
Abstract:
Detecting user affect automatically during real-time conversation is the main challenge towards our greater aim of infusing social intelligence into a natural-language, mixed-initiative High-Fidelity (Hi-Fi) audio control spoken dialog agent. In recent years, studies on affect detection from voice have moved on to using realistic, non-acted data, which is subtler. However, subtler emotions are harder to perceive, as tasks such as labelling and machine prediction demonstrate. This paper attempts to address part of this challenge by considering the role of user satisfaction ratings and of conversational/dialog features in discriminating contentment and frustration, two types of emotions known to be prevalent within spoken human-computer interaction. However, given the laboratory constraints, users might be positively biased when rating the system, indirectly making the reliability of the satisfaction data questionable. Machine learning experiments were conducted on two datasets, users and annotators, which were then compared in order to assess their reliability. Our results indicated that standard classifiers were significantly more successful in discriminating the abovementioned emotions and their intensities (reflected by user satisfaction ratings) from annotator data than from user data. These results corroborated that, first, satisfaction data could be used directly as an alternative target variable to model affect and could be predicted exclusively from dialog features, and, second, that this held only when predicting the abovementioned emotions from the annotators' data, suggesting that user bias does exist in a laboratory-led evaluation.
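As a rough illustration of the kind of experiment reported above (standard classifiers separating frustration from contentment using dialog features), the sketch below fits a logistic regression on synthetic features. The feature names, the synthetic data, and the choice of classifier are assumptions, not the study's actual setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Assumed dialog features: turn count, number of re-prompts, mean ASR confidence.
X = np.column_stack([rng.integers(2, 15, n),
                     rng.integers(0, 5, n),
                     rng.uniform(0.4, 1.0, n)])
# Synthetic labels: 1 = frustration, 0 = contentment (more re-prompts -> frustration).
y = (X[:, 1] + rng.normal(0, 0.8, n) > 2).astype(int)

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())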
Abstract:
E-learning systems output a huge quantity of data on a learning process. However, processing these data manually and generating an assessment report takes a lot of specialist human resources. Additionally, for formative assessment, the report should state the attainment level of the learning goals defined by the instructor. This paper describes the use of the granular linguistic model of a phenomenon (GLMP) to model the assessment of the learning process and implement the automated generation of an assessment report. GLMP is based on fuzzy logic and the computational theory of perceptions. This technique is useful for implementing complex assessment criteria using inference systems based on linguistic rules. Apart from the grade, the model also generates a detailed natural language progress report on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. This is illustrated by applying the model to the assessment of learning Dijkstra's algorithm in GRAPHs, a visual simulation-based graph algorithm learning environment.
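GLMP builds linguistic summaries from fuzzy perceptions of raw scores. Without access to the paper's actual membership functions or rules, the sketch below only shows the general pattern: fuzzify the fraction of correct responses into linguistic labels and pick the best-matching sentence for the report. The membership breakpoints and the wording are assumptions.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def proficiency_report(correct, total):
    score = correct / total
    memberships = {
        "is still at an early stage with Dijkstra's algorithm": triangular(score, -0.1, 0.0, 0.5),
        "shows partial mastery of Dijkstra's algorithm": triangular(score, 0.2, 0.5, 0.8),
        "has achieved a high proficiency level in Dijkstra's algorithm": triangular(score, 0.5, 1.0, 1.1),
    }
    phrase = max(memberships, key=memberships.get)
    return f"The student answered {correct} of {total} exercises correctly and {phrase}."

print(proficiency_report(17, 20))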
Abstract:
In the information society, large amounts of information are constantly being generated and transmitted, especially in the way that is most natural for humans, i.e., natural language. Social networks, blogs, forums, and Q&A sites form a dynamic large knowledge repository. Thus, although Web 2.0 contains structured data, the largest amount of information is still expressed in natural language. Linguistic structures for text recognition enable the extraction of structured information from texts. However, the expressiveness of current structures is limited because they have been designed with a strict order in their phrases, which limits their applicability to other languages and makes them more sensitive to grammatical errors. To overcome these limitations, in this paper we present a linguistic structure named "linguistic schema", with a richer expressiveness that introduces fewer implicit constraints over annotations.
Abstract:
We describe the work on infusion of emotion into a limited-task autonomous spoken conversational agent situated in the domestic environment, using a need-inspired, task-independent emotion model (NEMO). To demonstrate the generation of affect through the use of the model, we describe the work of integrating it with a natural-language, mixed-initiative HiFi-control spoken conversational agent (SCA). NEMO and the host system communicate externally, removing the need to modify the Dialog Manager, as is done in most existing dialog systems, in order to be adaptive. The first part of the paper concerns the integration between NEMO and the host agent. The second part summarizes the work on automatic affect prediction, namely frustration and contentment, from dialog features, a non-conventional source, in an attempt to move towards a more user-centric approach. The final part reports the evaluation results obtained from a user study in which both versions of the agent (non-adaptive and emotionally adaptive) were compared. The results provide substantial evidence of the benefits of adding emotion to a spoken conversational agent, especially in mitigating users' frustration and, ultimately, improving their satisfaction.
Abstract:
Acquired Brain Injury (ABI) has become one of the most common causes of neurological disability in developed countries. Cognitive disorders result in a loss of independence and, therefore, of patients' quality of life. Cognitive rehabilitation aims to promote patients' skills so that they can achieve their highest degree of personal autonomy. New technologies such as interactive video, whereby real situations of daily living are reproduced within a controlled virtual environment, enable the design of personalized therapies with a high level of generalization and great ecological validity. This paper presents a graphical tool that allows neuropsychologists to design, modify, and configure interactive video therapeutic activities through the combination of graphics and natural language. The tool has been validated by creating several Activities of Daily Living, and a preliminary usability evaluation has shown good clinical acceptance in the definition of complex interactive video therapies for cognitive rehabilitation.
Abstract:
To our knowledge, no current software development methodology explicitly describes how to move from the analysis model to the software architecture of the application. This paper presents a method to derive the software architecture of a system from its analysis model. To do this, we use MDA. Both the analysis model and the architectural model are PIMs described with UML 2. The model type mapping designed consists of several rules (expressed using OCL and natural language) that, when applied to the analysis artifacts, generate the software architecture of the application. Specifically, the rules act on elements of the UML 2 metamodel (metamodel mapping). We have developed a tool (using Smalltalk) that permits the automatic application of these rules to an analysis model defined in Rose™ to generate the application architecture expressed in the C2 architectural style.
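The rules in the paper above are written in OCL and natural language over the UML 2 metamodel and applied by a Smalltalk tool. The Python sketch below is only a schematic stand-in showing the shape of one such model-type mapping, turning analysis classes into C2-style components joined by connectors; the class names, the rule itself, and the data structures are assumptions, not the authors' rule set.

from dataclasses import dataclass, field

@dataclass
class AnalysisClass:          # simplified PIM analysis element
    name: str
    collaborators: list

@dataclass
class Architecture:           # simplified C2-style architectural model
    components: list = field(default_factory=list)
    connectors: list = field(default_factory=list)

def map_analysis_to_c2(analysis_classes):
    """Illustrative mapping rule: each analysis class becomes a component;
    each collaboration becomes a connector between the two components."""
    arch = Architecture()
    for cls in analysis_classes:
        arch.components.append(f"{cls.name}Component")
        for other in cls.collaborators:
            arch.connectors.append((f"{cls.name}Component", f"{other}Component"))
    return arch

model = [AnalysisClass("Order", ["Customer"]), AnalysisClass("Customer", [])]
print(map_analysis_to_c2(model))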