967 results for Natural language techniques, Semantic spaces, Random projection, Documents
Abstract:
In this paper we present a dataset composed of domain-specific sentiment lexicons in six languages for two domains. We used existing collections of reviews from Trip Advisor, Amazon, the Stanford Network Analysis Project and the OpinRank Review Dataset. We use an RDF model based on the lemon and Marl formats to represent the lexicons. We describe the methodology that we applied to generate the domain-specific lexicons and we provide access information to our datasets.
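As a rough illustration of what a lemon/Marl lexical entry might look like, the sketch below serializes one domain-specific sentiment entry as Turtle from plain Python data. The namespace URIs and property names are assumptions for illustration, not the dataset's actual schema.

```python
# Sketch: serialize one sentiment-lexicon entry in a lemon/Marl-like Turtle
# form. Namespaces and property names are illustrative assumptions.

def entry_to_turtle(lemma, lang, domain, polarity, value):
    """Render a single lexical entry as a Turtle snippet."""
    prefixes = (
        "@prefix lemon: <http://lemon-model.net/lemon#> .\n"
        "@prefix marl: <http://www.gsi.dit.upm.es/ontologies/marl/ns#> .\n"
        "@prefix lex: <http://example.org/lexicon/> .\n\n"
    )
    body = (
        f"lex:{lemma}_{lang} a lemon:LexicalEntry ;\n"
        f'    lemon:canonicalForm [ lemon:writtenRep "{lemma}"@{lang} ] ;\n'
        f"    marl:polarityValue {value} ;\n"
        f"    marl:hasPolarity marl:{polarity} ;\n"
        f'    marl:describesObject "{domain}" .\n'
    )
    return prefixes + body

print(entry_to_turtle("excellent", "en", "hotel", "Positive", 0.9))
```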
Abstract:
In this work we provide the theoretical basis (syntax and semantics) and a practical implementation of a framework for encoding reasoning over a fuzzy representation of the world (as human beings understand it). The interest in this work comes from two sources: removing the complexity involved in implementing such reasoning in a general-purpose programming language (one that offers no special constructs for representing fuzzy information), and providing a tool intelligent enough to answer expressive queries over conventional data in a constructive way. The framework, RFuzzy, allows rules and queries to be encoded in a syntax very close to the natural language human beings use to express their thoughts, but it is more than that. It can encode very useful concepts, such as fuzzifications (functions that convert crisp concepts into fuzzy ones), default values (used to provide less adequate but still valid results when the information needed for better ones is missing), similarity between attributes (used to search for individuals with a characteristic similar to the one sought), and synonyms or antonyms, and it allows the set of connectives and modifiers (including negation) usable in rules and queries to be extended. The personalization of fuzzy concept definitions (very useful for dealing with the subjective character of fuzziness, where whether someone counts as "tall" depends on the height of the person asking) is another of the facilities included. In addition, RFuzzy implements the multi-adjoint semantics. Its interest is that, besides obtaining the degree of satisfaction of a consequent from a rule, its credibility, and the degree of satisfaction of the antecedents, we can determine from a set of data how much credibility must be assigned to a rule to model that data's behaviour; the credibility of a rule for a particular situation can thus be determined automatically. Although the theoretical contribution is interesting by itself, especially the inclusion of the negation modifier, its practical uses are equally important. Among the different uses given to the framework we highlight emotion recognition, RoboCup control, granularity control in parallel/distributed computing, and flexible searches in databases.
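The fuzzification, default-value, and negation ideas can be sketched outside RFuzzy's own syntax; the plain-Python approximation below uses assumed membership-function shapes and thresholds, not RFuzzy's actual notation or semantics.

```python
# Sketch of three RFuzzy-style notions in plain Python: a fuzzification
# (crisp height -> truth degree of "tall"), a default value used when the
# data is missing, and a negation modifier. Shapes and thresholds are
# illustrative assumptions.

def tall(height_cm):
    """Piecewise-linear fuzzification of the crisp attribute 'height'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

def tall_or_default(height_cm, default=0.5):
    """Default value: a less adequate but still valid result when data is missing."""
    return default if height_cm is None else tall(height_cm)

def not_(degree):
    """Standard negation modifier over a truth degree."""
    return 1.0 - degree

print(tall(175))              # 0.5
print(tall_or_default(None))  # 0.5 (the default)
print(not_(tall(190)))        # 0.0
```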
Abstract:
The concept of algorithm is one of the core subjects in computer science, so it is extremely important for students to get a good grasp of it from the very start of their training. Having a tool that shepherds students through the process of learning this concept can make a huge difference to their instruction. Much has been written about how helpful algorithm visualization tools can be, and most authors agree that the decisive factor is how students use the tool: learners who are actively involved in visualization consistently outperform those who view the algorithms passively. We therefore think that one of the best exercises for learning an algorithm is for the user to simulate the algorithm's execution while using a visualization tool, thus performing a visual algorithm simulation. The first part of this thesis presents the eMathTeacher set of requirements together with an eMathTeacher-compliant tool called GRAPHs. For some years, we have been developing a theory about the key features of an effective e-learning system for teaching mathematical concepts and algorithms. This led to the definition of the eMathTeacher concept, which has materialized in the eMathTeacher set of requirements. An e-learning tool is eMathTeacher compliant if it works as a virtual math trainer: an online self-assessment tool that helps students actively and autonomously learn math concepts or algorithms, correcting their mistakes and providing clues to find the right answer without giving it away explicitly. In an eMathTeacher-compliant tool, the algorithm simulation does not continue until the user enters the correct answer. GRAPHs is an extendible environment designed for active, independent, visual-simulation-based learning of graph algorithms, set up to integrate simulators of different algorithms. Apart from creating and editing the graph and visualizing the changes made to it during simulation, the environment includes step-by-step correction, algorithm pseudo-code animation, pop-up questions, data-structure handling, and XML-based interaction-log creation. Assessment, in turn, is a key part of any learning process. E-learning environments can output huge amounts of data about this process, but that information has to be interpreted to arrive at a sound assessment that is not confined to merely counting mistakes. This includes establishing relationships between the available data and generating instructive linguistic descriptions of the learner's progress. Formative assessment should also specify the level of attainment of each learning goal defined by the instructor. Until now, only human experts were capable of making such assessments. Our goal has been to create a computational model that simulates the instructor's reasoning and generates an enlightening learning-evolution report in natural language. The second part of this thesis presents the granular linguistic model of learning assessment, which models the assessment of the learning process and automates the generation of formative assessment reports. The model is a particularization, to the assessment phenomenon, of the granular linguistic model of a phenomenon (GLMP) paradigm, based on fuzzy logic and the computational theory of perceptions, in whose development and formalization we collaborated. This technique, which uses inference systems based on linguistic rules and can implement complex assessment criteria, has been applied to two cases: the criterion-based assessment of interaction logs generated by GRAPHs and of Moodle quizzes. As a consequence, expert systems that assess both kinds of exercises have been implemented, tested, and used in the classroom. Apart from the numeric grade, the expert systems also generate natural language progress reports on the proficiency level achieved, based exclusively on objective data gathered from correct and incorrect responses. In addition, two configurable applications implementing these expert systems have been developed: one processes the files output by GRAPHs, and the other is a Moodle plug-in that performs the assessment based on quiz results.
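The flavour of GLMP-style report generation can be sketched as fuzzy memberships over an objective score mapped to linguistic templates. The membership shapes and wordings below are assumptions for illustration, not the thesis's actual model.

```python
# Sketch: map an objective correct-answer ratio to a linguistic label via
# triangular fuzzy memberships, then fill a natural-language report template.
# Membership shapes and phrasings are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

LABELS = {
    "low":    lambda x: tri(x, -0.01, 0.0, 0.5),
    "medium": lambda x: tri(x, 0.0, 0.5, 1.0),
    "high":   lambda x: tri(x, 0.5, 1.0, 1.01),
}

def report(goal, correct, total):
    """One-sentence formative report for a single learning goal."""
    ratio = correct / total
    label = max(LABELS, key=lambda name: LABELS[name](ratio))
    return f"The attainment of the goal '{goal}' is {label} ({correct}/{total} correct)."

print(report("Dijkstra's algorithm", 9, 10))
```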
Abstract:
This work has focused on solutions for automating the enrichment of sensor-network data sources with linguistic descriptions, so as to facilitate the subsequent generation of natural language texts. Natural language descriptions make the data accessible to a wider range of users and, therefore, allow better leveraging of investments in sensor networks. We considered the use of open databases to address the need for a large volume and diversity of geographical knowledge, and we also analysed data enrichment within methodological approaches to data curation and natural language generation methods. As a result, we propose a general method based on a generate-and-test strategy that includes a way of representing and using heuristic knowledge, with several reasoning stages, for constructing the linguistic descriptions that enrich the data. The general proposal was evaluated in three scenarios: two generating geographical references over complex, real-dimension sensor networks, and one generating temporal references. The evaluation results have shown the practical validity of the general proposal, exhibiting performance improvements over other approaches. Furthermore, the analysis of the results has allowed the expected impact of several lines of improvement in open databases to be identified and quantified.
Abstract:
© 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. Acknowledgments: The authors thank H. H. Nguyen for his early development work on the BeeWatch interface; E. O'Mahony, I. Pearce, and R. Comont for identifying numerous photographed bumblebees; B. Darvill, D. Ewing, and G. Perkins for enabling our partnership with the Bumblebee Conservation Trust; and S. Blake for his investments in developing the NLG feedback. The study was part of the Digital Conservation project of dot.rural, the University of Aberdeen's Digital Economy Research Hub, funded by RCUK (grant reference EP/G066051/1).
Abstract:
As the telecommunications industry evolves over the next decade to provide the products and services that people will desire, several key technologies will become commonplace. Two of these, automatic speech recognition and text-to-speech synthesis, will provide users with more freedom on when, where, and how they access information. While these technologies are currently in their infancy, their capabilities are rapidly increasing and their deployment in today's telephone network is expanding. The economic impact of just one application, the automation of operator services, is well over $100 million per year. Yet there still are many technical challenges that must be resolved before these technologies can be deployed ubiquitously in products and services throughout the worldwide telephone network. These challenges include: (i) High level of accuracy. The technology must be perceived by the user as highly accurate, robust, and reliable. (ii) Easy to use. Speech is only one of several possible input/output modalities for conveying information between a human and a machine, much like a computer terminal or Touch-Tone pad on a telephone. It is not the final product. Therefore, speech technologies must be hidden from the user. That is, the burden of using the technology must be on the technology itself. (iii) Quick prototyping and development of new products and services. The technology must support the creation of new products and services based on speech in an efficient and timely fashion. In this paper I present a vision of the voice-processing industry with a focus on the areas with the broadest base of user penetration: speech recognition, text-to-speech synthesis, natural language processing, and speaker recognition technologies. The current and future applications of these technologies in the telecommunications industry will be examined in terms of their strengths, limitations, and the degree to which user needs have been or have yet to be met. 
Although noteworthy gains have been made in areas with potentially small user bases and in the more mature speech-coding technologies, these subjects are outside the scope of this paper.
Abstract:
Speech interface technology, which includes automatic speech recognition, synthetic speech, and natural language processing, is beginning to have a significant impact on business and personal computer use. Today, powerful and inexpensive microprocessors and improved algorithms are driving commercial applications in computer command, consumer, data entry, speech-to-text, telephone, and voice verification. Robust speaker-independent recognition systems for command and navigation in personal computers are now available; telephone-based transaction and database inquiry systems using both speech synthesis and recognition are coming into use. Large-vocabulary speech interface systems for document creation and read-aloud proofing are expanding beyond niche markets. Today's applications represent a small preview of a rich future for speech interface technology that will eventually replace keyboards with microphones and loudspeakers to give easy accessibility to increasingly intelligent machines.
Abstract:
Introduction: Mesial temporal sclerosis (MTS) is the main cause of epilepsy resistant to drug treatment. Patients with MTS show difficulties in the semantic and phonological processing of language and a higher incidence of cerebral language reorganization (bilateral or right-sided) than the general population. Functional magnetic resonance imaging (fMRI) makes it possible to assess the cerebral reorganization of language networks by comparing brain activation patterns across brain regions. Objective: To investigate the linguistic performance of patients with unilateral left and right MTS and the occurrence of language-network reorganization on fMRI, in order to assess whether the reorganization was beneficial for language in these patients. Methods: We used clinical language tests and visual and responsive naming paradigms for fMRI developed for this study. We evaluated 24 patients with left MTS, 22 patients with right MTS, and 24 healthy controls, who underwent language tests (semantic and phonological fluency; naming of objects, verbs, and proper names; responsive naming; and word comprehension) and three fMRI language paradigms [visual confrontation naming (VCN), reading responsive naming (RRN), and word generation (WG)]. Six cerebral regions of interest (ROIs) were selected (inferior, middle, and superior frontal gyri; inferior, middle, and superior temporal gyri). Laterality Indices (LIs) were computed with two methods: bootstrap, from the LI-Toolbox, which is threshold-independent, and PSC, which indicates the intensity of brain activation of each voxel. Each patient group (left and right MTS) was divided into two subgroups according to performance relative to controls on the clinical language assessment, with a score of <= -1.5 used as the cut-off to divide the groups into patients with good and poor language performance. The linguistic performance of the subgroups was then compared against the bootstrap LI values. Results: Patients with left and right MTS performed worse than controls on the clinical tests of verb naming, proper-name naming, responsive naming, and verbal fluency. The fMRI brain activation maps showed a BOLD effect in frontal and temporoparietal language regions. The between-group activation comparison maps revealed that patients with left and right MTS show greater activation in homologous right-hemisphere regions than controls. The LIs corroborated these results, showing lower mean values for patients than for controls and, therefore, greater symmetry in language representation. The comparison between the bootstrap LI and performance on the clinical language tests indicated that, in the reading responsive naming paradigm, functional reorganization in the middle temporal gyrus, and possibly in the inferior and superior temporal gyri, was associated with preserved performance on naming tests. Conclusion: Patients with right and left MTS show impaired naming and verbal fluency and reorganization of the cerebral language network. Functional language reorganization in temporal regions, especially the middle temporal gyrus, was associated with preserved performance on naming tests in patients with left MTS in the reading responsive naming fMRI paradigm.
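The laterality index itself is a simple ratio; the sketch below computes LI = (L − R) / (L + R) from activation measures in homologous left and right ROIs. It illustrates only the basic formula, not the threshold-independent bootstrap procedure of the LI-Toolbox.

```python
# Sketch: basic laterality index from left/right ROI activation measures.
# LI ranges from -1 (fully right-lateralized) to +1 (fully left-lateralized);
# values near 0 indicate symmetric (reorganized) language representation.

def laterality_index(left_activation, right_activation):
    total = left_activation + right_activation
    if total == 0:
        raise ValueError("no activation in either ROI")
    return (left_activation - right_activation) / total

print(laterality_index(80, 20))  # 0.6 -> left-lateralized
print(laterality_index(55, 45))  # 0.1 -> nearly symmetric
```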
Abstract:
This dissertation examines ancient historiographic citation methodologies in light of Mikhail Bakhtin’s dichotomy between polyphony and monologization. In particular, this dissertation argues that Eusebius of Caesarea’s Historia ecclesiastica (HE) abandons the monologic citation methodology typical of previous Greek and Hellenistic historiography and introduces a polyphonic citation methodology that influences subsequent late-ancient Christian historiography to varying degrees. Whereas pre-Eusebian Greek and Hellenistic historiographers typically use citations to support the single authorial consciousness of the historiographer, Eusebius uses citations to counterbalance his own shortcomings as a witness to past events. Eusebius allows his citations to retain their own voice, even when they conflict with his. The result is a narrative that transcends the point of view of any single individual and makes multiple witnesses, including the narrator, available to the reader. Post-Eusebian late-ancient Christian historiographers exhibit the influence of Eusebius’ innovation, but they are not as intentional as Eusebius in their use of citation methodologies. Many subsequent Christian historiographers use both monologic and polyphonic citation methodologies. Their tendency to follow Eusebius’ practice of including numerous lengthy citations sometimes emphasizes points of view that oppose the author’s. When an opposing viewpoint surfaces in enough citations, a polyphonic citation methodology emerges. The reader holds the two different narrative strands in tension as the author continues to give voice to opposing viewpoints.
After illustrating the citation methodologies with passages from numerous Greek, Hellenistic, and late ancient Christian historiographers, this dissertation concludes with a short computational analysis that uses natural language processing to reveal some broad trends that highlight the previous findings and suggest a possibility for future research.
Abstract:
In this article we present COMPENDIUM, a modular text summarization tool. The tool consists of a core module with five well-differentiated stages: i) linguistic analysis; ii) redundancy detection; iii) topic identification; iv) relevance detection; and v) summary generation, plus a series of additional modules that extend the tool's functionality by enabling the generation of different types of summaries, for example summaries focused on a specific topic. We carry out an exhaustive evaluation in two different domains (newswire and documents about tourist destinations) and analyse the different types of summaries generated with COMPENDIUM (single-document, multi-document, generic, and topic-focused). We also compare our system with other current summarization systems. The results obtained show that COMPENDIUM is able to generate competitive summaries for the proposed summary types.
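The five-stage core can be sketched as a pipeline of functions. The naive stage implementations below (sentence splitting, duplicate filtering, frequency-based topic words, overlap scoring, top-k selection) are placeholder assumptions, not COMPENDIUM's actual algorithms.

```python
# Sketch of a COMPENDIUM-like five-stage extractive pipeline. Each stage is
# a deliberately naive stand-in: i) linguistic analysis -> sentence split,
# ii) redundancy detection -> drop duplicate bags of words, iii) topic
# identification -> most frequent content words, iv) relevance detection ->
# score sentences by topic-word overlap, v) summary generation -> top-k.

from collections import Counter

STOPWORDS = {"the", "a", "is", "in", "of", "and", "to", "it"}

def analyse(text):
    return [s.strip() for s in text.split(".") if s.strip()]

def remove_redundant(sentences):
    seen, kept = set(), []
    for s in sentences:
        key = frozenset(s.lower().split())
        if key not in seen:
            seen.add(key)
            kept.append(s)
    return kept

def topic_words(sentences, k=3):
    words = [w for s in sentences for w in s.lower().split() if w not in STOPWORDS]
    return {w for w, _ in Counter(words).most_common(k)}

def score(sentence, topics):
    return sum(w in topics for w in sentence.lower().split())

def summarize(text, k=1):
    sents = remove_redundant(analyse(text))
    topics = topic_words(sents)
    ranked = sorted(sents, key=lambda s: score(s, topics), reverse=True)
    return ". ".join(ranked[:k]) + "."

print(summarize("The hotel is great. The beach is great. The hotel beach view is great"))
```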
Abstract:
An emerging project focused on the detection and interpretation of metaphors with unsupervised methods. We present the characterization of the metaphor problem in Natural Language Processing, the theoretical foundations of the project, and the first results.
Abstract:
This paper presents a system for recognizing temporal expressions in Spanish and resolving their temporal reference. The identification and recognition of temporal expressions is based on a temporal expression grammar, and their resolution on an inference engine that holds the information needed to perform date operations on the recognized expressions. For further processing, the output is produced as XML tags that attach standard information about the resolution obtained. Different kinds of temporal expression annotation are described in other articles [WILSON2001][KATZ2001]. The evaluation of our proposal yielded successful results.
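The recognize-then-resolve split can be sketched with a small pattern grammar and date arithmetic. The expression patterns, resolution rules, and XML attribute names below are assumptions for illustration, not the system's actual grammar or tag set.

```python
# Sketch: recognize a few Spanish temporal expressions with a pattern
# grammar, resolve them against a reference date, and emit an XML-style tag.
# Patterns, resolution rules, and tag attributes are illustrative assumptions.

import re
from datetime import date, timedelta

RULES = [
    (re.compile(r"\bhoy\b", re.I), timedelta(days=0)),     # "today"
    (re.compile(r"\bayer\b", re.I), timedelta(days=-1)),   # "yesterday"
    (re.compile(r"\bmañana\b", re.I), timedelta(days=1)),  # "tomorrow"
]

def tag_expressions(text, ref_date):
    """Wrap each recognized expression in a tag carrying its resolved date."""
    for pattern, delta in RULES:
        resolved = (ref_date + delta).isoformat()
        text = pattern.sub(
            lambda m: f'<TIMEX val="{resolved}">{m.group(0)}</TIMEX>', text
        )
    return text

print(tag_expressions("La reunión fue ayer.", date(2004, 5, 10)))
```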
Abstract:
Preliminary research demonstrated the relevance of the EmotiBlog annotated corpus as a Machine Learning resource for detecting subjective data. In this paper we compare EmotiBlog with the JRC Quotes corpus in order to check the robustness of its annotation. We concentrate on its coarse-grained labels and carry out extensive Machine Learning experiments, also including lexical resources. The results obtained are similar to those obtained with the JRC Quotes corpus, demonstrating EmotiBlog's validity as a resource for the sentiment analysis task.
Abstract:
The development of the Web 2.0 led to the birth of new textual genres such as blogs, reviews or forum entries. The increasing number of such texts and the highly diverse topics they discuss make blogs a rich source for analysis. This paper presents a comparative study on open domain and opinion QA systems. A collection of opinion and mixed fact-opinion questions in English is defined and two Question Answering systems are employed to retrieve the answers to these queries. The first is generic, while the second is specialized for emotions. We comparatively evaluate and analyze the systems’ results, concluding that opinion Question Answering requires the use of specific resources and methods.
Abstract:
The exponential growth of subjective information in the framework of the Web 2.0 has led to the need for Natural Language Processing tools able to analyse and process such data for multiple practical applications. These tools require training on specifically annotated corpora, whose level of detail must be fine enough to capture the phenomena involved. This paper presents EmotiBlog, a fine-grained annotation scheme for subjectivity. We show how it is built and demonstrate the benefits it brings to the systems using it for training, through experiments on opinion mining and emotion detection. We employ corpora of different textual genres: a set of annotated reported speech extracted from news articles, the news titles annotated with polarity and emotion from SemEval 2007 (Task 14), and ISEAR, a corpus of real-life self-expressed emotion. We also show how the model built from the EmotiBlog annotations can be enhanced with external resources. The results demonstrate that EmotiBlog, through its structure and annotation paradigm, offers high-quality training data for systems dealing with both opinion mining and emotion detection.
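Training on an annotated subjectivity corpus can be sketched with a tiny bag-of-words Naive Bayes polarity classifier. The toy training examples and the add-one smoothing choice below are assumptions for illustration, not EmotiBlog data or the authors' actual models.

```python
# Sketch: multinomial Naive Bayes with add-one (Laplace) smoothing over
# bag-of-words features, trained on a toy polarity-annotated corpus.

import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label). Returns (doc counts, word counts)."""
    class_docs = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)
    for text, label in examples:
        word_counts[label].update(text.lower().split())
    return class_docs, word_counts

def classify(text, class_docs, word_counts):
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label, n_docs in class_docs.items():
        score = math.log(n_docs / sum(class_docs.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            # Add-one smoothing over the shared vocabulary.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

corpus = [
    ("i love this wonderful phone", "positive"),
    ("great battery and great screen", "positive"),
    ("i hate this awful phone", "negative"),
    ("terrible screen and awful battery", "negative"),
]
model = train(corpus)
print(classify("what a wonderful screen", *model))  # expected: positive
```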