973 results for Text generation


Relevance:

100.00%

Publisher:

Abstract:

Title: Data-Driven Text Generation using Neural Networks Speaker: Pavlos Vougiouklis, University of Southampton Abstract: Recent work on neural networks shows their great potential at tackling a wide variety of Natural Language Processing (NLP) tasks. This talk will focus on the Natural Language Generation (NLG) problem and, more specifically, on the extent to which neural network language models can be employed for context-sensitive and data-driven text generation. In addition, a neural network architecture for response generation in social media, along with the training methods that enable it to capture contextual information and effectively participate in public conversations, will be discussed. Speaker Bio: Pavlos Vougiouklis obtained his 5-year Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki in 2013. He was awarded an MSc degree in Software Engineering from the University of Southampton in 2014. In 2015, he joined the Web and Internet Science (WAIS) research group of the University of Southampton, and he is currently working towards his PhD in the field of Neural Network Approaches for Natural Language Processing. Title: Provenance is Complicated and Boring — Is there a solution? Speaker: Darren Richardson, University of Southampton Abstract: Paper trails, auditing, and accountability — arguably not the sexiest terms in computer science. But then you discover that you've possibly been eating horse meat, and the importance of provenance becomes almost palpable. Having accepted that we should be creating provenance-enabled systems, we face the non-trivial challenge of communicating that provenance to casual users: users should not need a detailed working knowledge of your system, and they certainly shouldn't be expected to understand the data model. So how, then, do you give users an insight into the provenance without having to build a bespoke system for each and every provenance installation? Speaker Bio: Darren is a final-year Computer Science PhD student. He completed his undergraduate degree in Electronic Engineering at Southampton in 2012.
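As a rough illustration of the data-driven generation the first talk describes, the sketch below shows a minimal word-level recurrent language model that produces text by sampling one token at a time. The vocabulary, dimensions, and sampling loop are assumptions made for illustration, not the speaker's actual architecture.

```python
# Minimal word-level LSTM language model for text generation (illustrative sketch).
# Vocabulary, dimensions, and the sampling loop are assumptions, not the talk's model.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)            # (batch, seq, embed_dim)
        h, state = self.lstm(x, state)    # (batch, seq, hidden_dim)
        return self.out(h), state         # logits over the vocabulary

def generate(model, vocab, start_word, max_len=20):
    """Sample a continuation one token at a time (untrained weights -> random text)."""
    index = {w: i for i, w in enumerate(vocab)}
    tokens = [index[start_word]]
    state = None
    with torch.no_grad():
        for _ in range(max_len):
            inp = torch.tensor([[tokens[-1]]])
            logits, state = model(inp, state)
            probs = torch.softmax(logits[0, -1], dim=-1)
            tokens.append(torch.multinomial(probs, 1).item())
    return " ".join(vocab[t] for t in tokens)

vocab = ["<s>", "the", "model", "generates", "text", "from", "data", "."]
print(generate(TinyLM(len(vocab)), vocab, "<s>"))
```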

Relevance:

100.00%

Publisher:

Abstract:

A resource to help first-year secondary students improve their writing. It covers every stage of the writing process and the stated objectives for the teaching of English, and includes, among other things, character creation, short-story writing, review writing, and the language of marketing.

Relevance:

100.00%

Publisher:

Abstract:

A resource to help second-year secondary students raise the level of their writing. It covers every stage of the writing process and the stated objectives for the teaching of English, and includes, among other things, figurative language, poetic form, storytelling, information, description and analysis, persuasive rhetoric, and persuasive logic.

Relevance:

100.00%

Publisher:

Abstract:

A resource to help secondary students raise the level of their writing. It covers every stage of the writing process and the stated objectives for the teaching of English, and includes, among other things, narrative, poetic forms, travel writing, persuasive argument, literary analysis, text types, and parody.

Relevance:

100.00%

Publisher:

Abstract:

There is an enormous amount of information on the Internet about countless topics, and this information grows larger every day. In theory, computer programs could benefit from this wealth of available information to establish new connections between concepts, but the information often appears in unstructured formats such as natural-language text. For this reason, it is very important to be able to automatically obtain information from sources of different kinds, and to process, filter and enrich it, so as to maximise the knowledge we can obtain from the Internet. This project consists of two distinct parts. The first explores information filtering. The input to the system is a set of triples provided by the University of Coimbra (they obtained the triples through an information-extraction process applied to natural-language text). However, owing to the complexity of the extraction task, some of the triples are of doubtful quality and need to go through a filtering process. Given these triples about a specific topic, the input is analysed to determine which information is relevant to the topic and which should be discarded. To do so, the input is compared against an online knowledge source. The second part of the project explores information enrichment. Several online text sources written in natural language (in English) are used, and information potentially relevant to the specified topic is extracted from them. Some of these knowledge sources are written in ordinary English, and others in Simple English, a controlled subset of the language with a reduced vocabulary and simpler syntactic structures. We study how this affects the quality of the extracted triples, and whether the information obtained from Simple English sources is of higher quality than that extracted from ordinary English sources.
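A minimal sketch of the filtering step described above, assuming the triples arrive as (subject, relation, object) tuples and using word overlap with a short reference text as a stand-in for the online knowledge source; the scoring function and threshold are illustrative assumptions, not the project's actual method.

```python
# Illustrative sketch: filter noisy extraction triples by comparing them with a
# reference text about the topic (a stand-in for the online knowledge source).
import re

def vocabulary(text):
    """Lower-cased set of content words (3+ letters) in a text."""
    return set(re.findall(r"[a-záéíóúñ]{3,}", text.lower()))

def relevance(triple, topic_vocab):
    """Fraction of the triple's content words that also occur in the topic text."""
    words = vocabulary(" ".join(triple))
    return len(words & topic_vocab) / max(len(words), 1)

def filter_triples(triples, topic_text, threshold=0.5):
    topic_vocab = vocabulary(topic_text)
    return [t for t in triples if relevance(t, topic_vocab) >= threshold]

topic_text = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
triples = [
    ("Eiffel Tower", "located in", "Paris"),   # on-topic, kept
    ("banana", "is a", "fruit"),               # off-topic, discarded
]
print(filter_triples(triples, topic_text))
```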

Relevance:

70.00%

Publisher:

Abstract:

The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
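To give a flavour of what such a verbalization technique does, the sketch below turns a trivially small, purely sequential process model into text; the data structure and sentence templates are illustrative assumptions and do not reproduce the technique proposed in the paper, which handles far richer models.

```python
# Illustrative sketch: verbalize a purely sequential process model as text.
# The process representation and templates are assumptions; the paper's technique
# covers gateways, lanes, and proper linguistic realisation.
from dataclasses import dataclass

@dataclass
class Activity:
    role: str      # who performs the step
    verb: str      # what is done
    obj: str       # what it is done to

def verbalize(process_name, activities):
    sentences = [f"The '{process_name}' process works as follows."]
    for i, act in enumerate(activities):
        opener = "First," if i == 0 else ("Then," if i < len(activities) - 1 else "Finally,")
        sentences.append(f"{opener} the {act.role} {act.verb} the {act.obj}.")
    return " ".join(sentences)

order_process = [
    Activity("clerk", "registers", "incoming order"),
    Activity("warehouse worker", "picks", "ordered items"),
    Activity("clerk", "sends", "invoice"),
]
print(verbalize("order handling", order_process))
```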

Relevance:

60.00%

Publisher:

Abstract:

Website accompanying the thesis: http://daou.st/JSreal

Relevance:

60.00%

Publisher:

Relevance:

60.00%

Publisher:

Abstract:

The vast amount of information available on the Internet is making it increasingly difficult for users to digest it all, and doing so is currently almost unthinkable without tools based on Human Language Technologies (HLT), such as information retrieval systems or automatic summarisers. The interest of this emerging project (and thus its main objective) is motivated precisely by the need to define and build an HLT-based technological framework capable of processing and semantically annotating information, as well as enabling the automatic generation of information, making the type of information presented flexible and adapting it to users' needs. This article provides an overview of the project, focusing on the proposed architecture and its current status.

Relevance:

40.00%

Publisher:

Abstract:

A class of twenty-two grade one children was tested to determine their reading levels using the Stanford Diagnostic Reading Achievement Test. Based on these results and teacher input, the students were paired according to reading ability. The students' ages ranged from six years four months to seven years four months at the commencement of the study. Eleven children were assigned to the language experience group, and their partners became the text group. Each member of the language experience group generated a list of eight to-be-learned words. The treatment consisted of exposing the student to a given word three times per session for ten sessions, over a period of five days. The dependent variables consisted of word identification speed, word identification accuracy, and word recognition accuracy. Each member of the text group followed the same procedure using his/her partner's list of words. Upon completion of this training, the entire process was repeated, with members of the text group from the first part becoming members of the language experience group and vice versa. The results suggest that, generally speaking, language experience words are identified faster than text words, but that there is no difference in the rate at which these words are learned. Language experience words may be identified faster because the auditory-semantic information is more readily available in them than in text words. The rate of learning for both types of words, however, may be dictated by the orthography of the to-be-learned word.

Relevance:

40.00%

Publisher:

Abstract:

In recent years, Twitter has become one of the most important microblogging services of the Web 2.0. Among its possible uses, it can be employed for communicating and broadcasting information in real time. The goal of this research is to analyze the task of automatic tweet generation from a text summarization perspective in the context of the journalism genre. To achieve this, different state-of-the-art summarizers are selected and employed for producing multilingual tweets in two languages (English and Spanish). A wide experimental framework is proposed, comprising the creation of a new corpus, the generation of the automatic tweets, and their assessment through a quantitative and a qualitative evaluation, where informativeness, indicativeness and interest are key criteria that should be ensured in the proposed context. From the results obtained, it was observed that although the original tweets were considered model tweets with respect to their informativeness, they were not among the most interesting ones from a human viewpoint. Therefore, relying only on these tweets may not be the ideal way to communicate news through Twitter, especially if a more personalized and catchy style of reporting news is desired. In contrast, we showed that recent text summarization techniques may be more appropriate, reflecting a balance between indicativeness and interest, even if their content differed from the tweets delivered by the news providers.
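As a rough illustration of the summarization-based approach, the sketch below derives a tweet from a short article using a simple frequency-based extractive scorer; the scoring, the character-limit handling, and the example text are assumptions, not the state-of-the-art systems evaluated in the study.

```python
# Illustrative sketch: produce a tweet from a news article with a simple
# frequency-based extractive summarizer (not the systems evaluated above).
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "was", "for", "with"}

def sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tweet_from(article, limit=280):
    words = [w for w in re.findall(r"[a-z]+", article.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        toks = [w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[w] for w in toks) / max(len(toks), 1)

    best = max(sentences(article), key=score)          # most representative sentence
    return best if len(best) <= limit else best[: limit - 1] + "…"

article = ("The city council approved the new tram line on Tuesday. "
           "The tram line will connect the harbour with the airport. "
           "Officials said the line should open in 2027.")
print(tweet_from(article))
```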

Relevance:

30.00%

Publisher:

Abstract:

Understanding the dynamics of disease spread is essential in contexts such as estimating the load on medical services, as well as risk assessment and intervention policies against large-scale epidemic outbreaks. However, most of the information becomes available only after the outbreak itself, and preemptive assessment is far from trivial. Here, we report on an agent-based model developed to investigate such epidemic events in a stylised urban environment. For most diseases, infection of a new individual may occur through casual contact in crowds as well as through repeated interactions with social partners such as work colleagues or family members. Our model therefore accounts for both phenomena. Given the scale of the system, efficient parallel computing is required. In this presentation, we focus on aspects related to parallelisation for large-scale network generation and massively multi-agent simulations.
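A minimal, serial sketch of the two contact mechanisms mentioned above, with random crowd contacts and repeated contacts within a fixed social circle; the population size, probabilities, and network structure are illustrative assumptions, and the actual model is far larger and relies on parallel computing.

```python
# Illustrative sketch: agent-based spread with two contact mechanisms.
# Parameters and network structure are assumptions, not the presented model.
import random

N, P_CROWD, P_SOCIAL, CROWD_CONTACTS, DAYS = 1000, 0.02, 0.10, 5, 30
random.seed(1)

infected = {0}                                                # one initial case
circle = {i: random.sample(range(N), 4) for i in range(N)}    # fixed social partners

for day in range(DAYS):
    new = set()
    for i in list(infected):
        # casual contacts in crowds: a few random strangers per day
        for j in random.sample(range(N), CROWD_CONTACTS):
            if j not in infected and random.random() < P_CROWD:
                new.add(j)
        # repeated interactions with social partners (family, colleagues)
        for j in circle[i]:
            if j not in infected and random.random() < P_SOCIAL:
                new.add(j)
    infected |= new
    print(f"day {day + 1}: {len(infected)} infected")
```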

Relevance:

30.00%

Publisher:

Abstract:

The world is rich with information such as signage and maps to assist humans in navigating. We present a method to extract topological spatial information from a generic bitmap floor plan and build a topometric graph that can be used by a mobile robot for tasks such as path planning and guided exploration. The algorithm first detects and extracts text in an image of the floor plan. Using the locations of the extracted text, flood fill is used to find the rooms and hallways. Doors are found by matching SURF features, and these form the connections between rooms, which are the edges of the topological graph. Our system is able to automatically detect doors and differentiate between hallways and rooms, which is important for effective navigation. We show that our method can extract a topometric graph from a floor plan and is robust against the ambiguous cases most commonly seen in floor plans, including elevators and stairwells.
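As an illustration of the flood-fill step, the sketch below labels the free-space regions of a tiny hand-made grid and connects regions that share a door cell; the grid, the 'D' door marking, and the graph format are assumptions, since the full system works on real floor-plan bitmaps with text and SURF-based door detection.

```python
# Illustrative sketch: flood-fill room labelling on a toy grid, then connect
# rooms that share a door cell to form the edges of a topological graph.
from collections import deque

plan = ["#########",
        "#   #   #",
        "#   D   #",   # 'D' marks a door between the two rooms
        "#########"]

rows, cols = len(plan), len(plan[0])
labels, next_label, edges = {}, 0, set()

for r in range(rows):
    for c in range(cols):
        if plan[r][c] == " " and (r, c) not in labels:
            next_label += 1                      # flood-fill a new room
            queue = deque([(r, c)])
            labels[(r, c)] = next_label
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if plan[ny][nx] == " " and (ny, nx) not in labels:
                        labels[(ny, nx)] = next_label
                        queue.append((ny, nx))

# a door connects the rooms adjacent to it -> edges of the topological graph
for r in range(rows):
    for c in range(cols):
        if plan[r][c] == "D":
            adjacent = {labels[p] for p in ((r, c - 1), (r, c + 1), (r - 1, c), (r + 1, c)) if p in labels}
            if len(adjacent) == 2:
                edges.add(tuple(sorted(adjacent)))

print(f"rooms: {next_label}, connections: {sorted(edges)}")
```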

Relevance:

30.00%

Publisher:

Abstract:

This research is about jazz in Chile in relation to modernity and identity. The final chapters focus on the newest generation of jazz musicians to emerge in the 1990s and on the composer and guitarist Ángel Parra. A historical and sociological approach is developed, which is then used to analyse modernity and identity, as well as postmodernity and globalization. Modernity is studied through the texts of Adorno, Baudrillard, Brünner, García Canclini, Habermas and Jameson; identity through the texts of Aharonián, Cordúa, Garretón, Gissi, Larraín and others. Chapter 3 deals with Latin American musicology and research on jazz, in relation to the approach developed in Chapter 2. Chapters 4 and 5 cover the history of jazz in Chile up to the beginning of the XXI century. Chapter 6 focuses on Ángel Parra Orrego. The conclusions of this investigation highlight the modernist dynamic that has driven the development of jazz in Chile, which in Ángel Parra's case has been overcome by a postmodernist approach. This approach has resolved, in a creative way, questions of modernity and identity in jazz practice in a Latin American country.

Relevance:

30.00%

Publisher:

Abstract:

[Excerpt] In response to the longstanding and repeated criticisms that HR does not add value to organizations, the past 10 years have seen a burgeoning of research attempting to demonstrate that progressive HR practices result in higher organizational performance. Huselid’s (1995) groundbreaking study demonstrated that a set of HR practices he referred to as High Performance Work Systems (HPWS) were related to accounting profits and market value of firms. Since then, a number of studies have shown similar positive relationships between HR practices and various measures of firm performance. While the studies comprising what I refer to as “first generation SHRM research” have added to what is becoming a more convincing body of evidence of the positive relationship between HR and performance, this body tends to lack sufficient data to demonstrate that the relationship is actually causal, in the sense that HR practices, when instituted, lead to higher performance. The next generation of SHRM research will begin (and, in fact, has begun) to focus on designing more rigorous tests of the hypothesis that employing progressive HRM systems actually results in higher organizational performance. This generation of research will focus on two aspects: demonstrating the HRM value chain, and proving causality as opposed to mere covariation.