28 results for Web log analysis
at Universidad Politécnica de Madrid
Abstract:
There is significant concern about the relevance and difficulty of learning certain topics in Strength of Materials and Structural Analysis. Most students of Continuum Mechanics and Structural Analysis in Civil Engineering point out some key concepts as especially difficult when acquiring specific skills. These concepts entail comprehension difficulties, yet they ease access to structural analysis in more advanced subjects. Likewise, some elusive but basic structural concepts, such as flexibility, stiffness or influence lines, are paramount for developing the further skills required for advanced structural design: tall buildings, arch-type structures and bridges. As new curricular itineraries are currently being implemented, it appears appropriate to devise a repository of interactive web-based applications for training in those basic concepts. This should help students grasp the complexity of such concepts, develop intuitive knowledge of actual structural response and improve their preparation for exams. In this work, a web-based learning assistant system for influence lines on continuous beams is presented. It consists of a collection of interactive, user-friendly applications accessible via the Web, available in both Spanish and English. Rather than a “black box” system, the procedure involves open interaction with the student, who can simulate and visualise the structural response. The student can thus set the geometric, topological and mechanical layout of a continuous beam and change or shift the loading and support conditions. The changes in the beam response appear on screen simultaneously, so that the effects of the several factors involved in structural analysis become apparent. The system is implemented as a set of web pages encompassing interactive exercises and problems, written in JavaScript on top of the jQuery and Dygraphs frameworks, chosen for their efficiency and graphics capabilities. Students can freely reinforce their self-study of this subject in order to face their exams more confidently. This collection is also expected to be added to the "Virtual Lab of Continuum Mechanics" of the UPM, launched in 2013 (http://serviciosgate.upm.es/laboratoriosvirtuales/laboratorios/medios-continuos-en-construcci%C3%B3n)
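To give a flavour of how such an interactive exercise can be wired together, the sketch below plots an influence line with Dygraphs and redraws it when the student changes the span. It uses the simply supported case, where a unit load at position x gives a left-reaction ordinate R_A(x) = 1 - x/L; the element ids, span input and closed-form ordinate are illustrative assumptions, not the repository's actual continuous-beam exercises.

```javascript
// Minimal sketch of an interactive influence-line plot. Assumes the page
// contains <div id="graph"></div> and <input id="span">, with Dygraphs loaded.
// Simply supported beam of span L: a unit load at x gives R_A(x) = 1 - x/L.
function influenceLineData(L) {
  var rows = [];
  for (var x = 0; x <= L; x += L / 100) {
    rows.push([x, 1 - x / L]); // [load position, reaction ordinate]
  }
  return rows;
}

var g = new Dygraph(
  document.getElementById("graph"),
  influenceLineData(10),
  { labels: ["load position x (m)", "R_A"], xlabel: "x", ylabel: "R_A" }
);

// Redraw when the student changes the span length.
document.getElementById("span").addEventListener("change", function (e) {
  g.updateOptions({ file: influenceLineData(Number(e.target.value)) });
});
```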
Abstract:
This thesis is the result of a project whose objective has been to develop and deploy a dashboard for sentiment analysis of football on Twitter, based on web components and D3.js. To this end, a visualisation server has been developed to present the data obtained from Twitter and analysed with Senpy. This visualisation server has been built with Polymer web components and D3.js. Data mining has been done with a pipeline connecting Twitter, Senpy and ElasticSearch. Luigi has been used in this process because it helps build complex pipelines of batch jobs; it has analysed all tweets and stored them in ElasticSearch. D3.js has then been used to create interactive widgets that make the data easily accessible; these widgets allow the user to interact with them and filter the data most relevant to him. Polymer web components have been used to build this dashboard according to Google's Material Design and to show dynamic data in widgets. As a result, this project allows an extensive analysis of the social network, pointing out the influence of players and teams and the emotions and sentiments that emerge over a period of time.
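A minimal D3.js sketch of one such widget is given below: a bar chart of tweet counts per sentiment of the kind the dashboard could feed from the Senpy/ElasticSearch pipeline. The data, element id and dimensions are illustrative assumptions (D3 v4+ API).

```javascript
// Sketch: sentiment bar-chart widget (assumes <div id="chart"> and D3 v4+;
// in the real dashboard the data would come from ElasticSearch).
var data = [
  { sentiment: "positive", count: 120 },
  { sentiment: "neutral",  count:  80 },
  { sentiment: "negative", count:  45 }
];

var width = 300, height = 200;
var svg = d3.select("#chart").append("svg")
  .attr("width", width).attr("height", height);

var x = d3.scaleBand()
  .domain(data.map(function (d) { return d.sentiment; }))
  .range([0, width]).padding(0.2);

var y = d3.scaleLinear()
  .domain([0, d3.max(data, function (d) { return d.count; })])
  .range([height, 0]);

svg.selectAll("rect").data(data).enter().append("rect")
  .attr("x", function (d) { return x(d.sentiment); })
  .attr("y", function (d) { return y(d.count); })
  .attr("width", x.bandwidth())
  .attr("height", function (d) { return height - y(d.count); });
```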
Abstract:
Globalization has intensified competition, as evidenced by the growing number of international classification systems (rankings) and the attention paid to them. Doctoral education has an international character in itself; it should promote opportunities for graduate students to participate in these international studies. Quality and competitiveness are two of the most important issues for universities. To promote the interest of graduates in continuing their education after the graduate level, it would be necessary to improve the published information on doctoral programs: their visibility should increase, providing high-quality, easily accessible and comparable information that includes all the relevant aspects of these programs. The authors analysed the website contents of doctoral programs; a general lack of quality and very poor information about the contents were observed, so it was concluded that none of them could constitute a model for creating new websites. Recommendations on the format and contents of the web pages were made by a discussion group. They recommended an attractive design: a page with easy access to contents, easy to find on the net, and with the information in more than one language. It should include complete information on the program and the academic staff. The study results should also be included, easily accessible and with quantitative data, such as the number of students who completed the program, publications, research projects, average duration of the studies, etc. This will facilitate the choice of program.
Abstract:
The Semantic Web aims to allow machines to make inferences using the explicit conceptualisations contained in ontologies. By pointing to ontologies, Semantic Web-based applications are able to inter-operate and share common information easily. Nevertheless, multilingual semantic applications are still rare, owing to the fact that most online ontologies are monolingual in English. To address this issue, techniques for ontology localisation and translation are needed. However, traditional machine translation is difficult to apply to ontologies, because ontology labels tend to be quite short and linguistically different from the free-text paradigm. In this paper, we propose an approach to enhance machine translation of ontologies by exploiting the well-structured concept descriptions contained in the ontology. In particular, our approach leverages the semantics contained in the ontology by using Cross-Lingual Explicit Semantic Analysis (CLESA) for context-based disambiguation in phrase-based Statistical Machine Translation (SMT). The presented work is novel in that, to the best of our knowledge, CLESA has not previously been applied in SMT.
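The core of the context-based disambiguation step can be pictured as follows: each candidate translation of an ontology label is scored by the cosine similarity between the cross-lingual concept vector of the source context and that of the candidate, and the best-scoring candidate wins. The sketch below assumes the concept vectors have already been produced by an ESA-style indexer over a shared concept space; the vectors and candidate list are made up for illustration.

```javascript
// Sketch of CLESA-style candidate ranking: pick the candidate translation
// whose concept vector is most similar to the source context's vector.
// Concept vectors (concept -> weight) are assumed to come from an ESA indexer.
function cosine(a, b) {
  var dot = 0, na = 0, nb = 0;
  for (var k in a) {
    na += a[k] * a[k];
    if (k in b) dot += a[k] * b[k];
  }
  for (var k2 in b) nb += b[k2] * b[k2];
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

function disambiguate(contextVector, candidates) {
  // candidates: [{ translation: "...", vector: {...} }, ...]
  return candidates.reduce(function (best, c) {
    var score = cosine(contextVector, c.vector);
    return score > best.score ? { translation: c.translation, score: score } : best;
  }, { translation: null, score: -1 });
}

// Illustrative only: disambiguating "bank" in a financial ontology context.
var context = { Finance: 0.9, Money: 0.7, River: 0.05 };
var best = disambiguate(context, [
  { translation: "banco (institución)", vector: { Finance: 0.8, Money: 0.6 } },
  { translation: "orilla (río)",        vector: { River: 0.9, Geography: 0.4 } }
]);
console.log(best.translation); // -> "banco (institución)"
```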
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources for their combination. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules make it possible to specify behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories from the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified discovery framework, which allows configuring agents to perform automated tasks; a scraping ontology, defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings out of the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
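A content-level discovery rule of the kind described above can be pictured as a mapping from CSS selectors to semantic properties. The sketch below applies such a rule to a piece of news markup and emits subject-property-value triples; the selectors, vocabulary URIs and markup are illustrative assumptions, not the thesis's actual scraping ontology.

```javascript
// Sketch of a content-level discovery rule: map selected DOM nodes to
// semantic entities (triples). Selectors and vocabulary are illustrative.
var newsRule = {
  type: "http://example.org/vocab#NewsItem",   // assumed vocabulary
  fields: {
    "http://example.org/vocab#headline": "h1.title",
    "http://example.org/vocab#author":   ".byline .author",
    "http://example.org/vocab#body":     "div.article-body"
  }
};

function applyRule(rule, root) {
  var triples = [["_:item", "rdf:type", rule.type]];
  Object.keys(rule.fields).forEach(function (property) {
    var node = root.querySelector(rule.fields[property]);
    if (node) triples.push(["_:item", property, node.textContent.trim()]);
  });
  return triples;
}

// In a browser, the root would be the retrieved representation:
// var triples = applyRule(newsRule, document);
```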
Abstract:
Compile-time program analysis techniques can be applied to Web service orchestrations to prove or check various properties. In particular, service orchestrations can be subjected to resource analysis, in which safe approximations of upper and lower resource usage bounds are deduced. A uniform analysis can be performed simultaneously for different generalized resources that can be directly correlated with cost- and performance-related quality attributes, such as invocations of partners, network traffic, number of activities, iterations, and data accesses. The resulting safe upper and lower bounds do not depend on probabilistic assumptions, and are expressed as functions of the size or length of data components from an initiating message, using a fine-grained structured data model that corresponds to the XML style of information structuring. The analysis is performed by transforming a BPEL-like representation of an orchestration into an equivalent program in another programming language for which the appropriate analysis tools already exist.
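As a toy illustration of the kind of output such an analysis produces, consider an orchestration that invokes one partner per item of an incoming order list, plus at most one optional logging call: the inferred bounds on partner invocations are then functions of the list length n. The sketch below simply evaluates such bound functions; the orchestration and its bounds are assumptions for illustration, not results of the paper.

```javascript
// Toy illustration: safe bounds on the "partner invocations" resource for
// an orchestration handling an order of n items. The bound functions are
// assumed outputs of the analysis, expressed in the message size n.
function invocationBounds(n) {
  return {
    lower: n,      // one mandatory partner invocation per item
    upper: n + 1   // plus at most one optional logging call
  };
}

var b = invocationBounds(50);
console.log("between " + b.lower + " and " + b.upper + " invocations");
```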
Abstract:
This work presents the main differences in nuclear data uncertainties among three nuclear data libraries: EAF-2007, EAF-2010 and SCALE-6.0, under different neutron spectra: LWR, ADS and DEMO (fusion). To take the neutron spectrum into account, the uncertainty data are collapsed to one group. This is a simple way to see the differences among libraries for one application; the effect of the neutron spectrum on different applications can also be observed. These comparisons are presented only for the (n,fission), (n,gamma) and (n,p) reactions, for the main transuranic isotopes (234,235,236,238U, 237Np, 238,239,240,241Pu, 241,242m,243Am, 242,243,244,245,246,247,248Cm, 249Bk, 249,250,251,252Cf), but general comparisons among the libraries, taking all included isotopes into account, are presented as well. Other works present target accuracies for nuclear data uncertainties; here, these targets are compared with the uncertainties in the above libraries. The main results of these comparisons are that EAF-2010 has reduced its uncertainties with respect to EAF-2007 for many isotopes for (n,gamma) and (n,fission), but not for (n,p); SCALE-6.0 gives lower uncertainties for (n,fission) reactions for ADS and PWR applications, but higher uncertainties for (n,p) reactions in all applications. For the (n,gamma) reaction, when SCALE-6.0 and EAF-2010 are compared, the number of isotopes with higher uncertainties is quite similar to the number with lower uncertainties. When the effect of the neutron spectra is analysed, the ADS neutron spectrum yields the highest uncertainties for the (n,gamma) and (n,fission) reactions in all libraries.
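The one-group collapse mentioned above is, in essence, a spectrum-weighted average. Under one simple scheme (assuming fully correlated groups), group-wise relative uncertainties u_g are collapsed with the group fluxes phi_g as sum(phi_g * u_g) / sum(phi_g); the sketch below uses made-up three-group numbers and this assumed scheme for illustration only.

```javascript
// Minimal sketch of collapsing group-wise uncertainties to one group by
// spectrum weighting (assuming fully correlated groups):
//   collapsed = sum(phi_g * u_g) / sum(phi_g)
// The three-group numbers below are made up for illustration.
function collapseOneGroup(phi, u) {
  var num = 0, den = 0;
  for (var g = 0; g < phi.length; g++) {
    num += phi[g] * u[g];
    den += phi[g];
  }
  return num / den;
}

var spectrum = [0.2, 0.5, 0.3];    // group fluxes (need not be normalised)
var uncert   = [0.08, 0.12, 0.30]; // relative uncertainties per group
console.log(collapseOneGroup(spectrum, uncert)); // -> 0.166
```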
Abstract:
Idea Management Systems are an implementation of the open innovation notion in the Web environment, using crowdsourcing techniques. In this area, one of the popular methods for coping with large amounts of data is duplicate detection. With our research, we address the question of whether there is room to introduce more relationship types, and to what degree such a change would affect the amount of idea metadata and its diversity. Furthermore, based on hierarchical dependencies between idea relationships and on relationship transitivity, we propose a number of methods for dataset summarization. To evaluate our hypotheses, we annotate idea datasets with new relationships, using the contemporary methods of Idea Management Systems to detect idea similarity. With relationship-annotated datasets at our disposal, we determine whether idea features not related to the idea topic (e.g. innovation size) have any relation to how annotators perceive types of idea similarity or dissimilarity.
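One summarization method based on relationship transitivity can be pictured as collapsing ideas connected by "duplicate-of" links into a single representative. The union-find sketch below does exactly that over an illustrative set of idea pairs; it is a generic sketch of the idea, not the paper's actual algorithm.

```javascript
// Sketch: summarise an idea dataset by collapsing transitive "duplicate"
// relationships into clusters (union-find). The pairs are illustrative.
function makeUnionFind(n) {
  var parent = Array.from({ length: n }, function (_, i) { return i; });
  function find(x) { return parent[x] === x ? x : (parent[x] = find(parent[x])); }
  function union(a, b) { parent[find(a)] = find(b); }
  return { find: find, union: union };
}

var duplicatePairs = [[0, 1], [1, 2], [3, 4]]; // idea ids marked as duplicates
var uf = makeUnionFind(5);
duplicatePairs.forEach(function (p) { uf.union(p[0], p[1]); });

// Group ideas by representative: {0,1,2} and {3,4} collapse to two entries.
var clusters = {};
for (var id = 0; id < 5; id++) {
  var rep = uf.find(id);
  (clusters[rep] = clusters[rep] || []).push(id);
}
console.log(clusters); // e.g. { "2": [0, 1, 2], "4": [3, 4] }
```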
Abstract:
Providing QoS in the context of ad hoc networks spans a very wide field of application, from the perspective of every level of the network architecture. In other words, it is possible to speak of QoS when a network is capable of guaranteeing trustworthy end-to-end communication between any pair of network nodes, by means of an efficient management and administration of resources that allows a suitable differentiation of services in accordance with the characteristics and demands of each application. The principal objective of this article is the analysis of the quality-of-service parameters that reactive routing protocols such as AODV and DSR provide in mobile ad hoc networks, supported by the ns-2 simulator. We analyse the behaviour of parameters such as effective channel capacity, packet loss and latency under these routing protocols, and show which protocol presents better Quality of Service (QoS) characteristics in MANET networks.
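The metrics compared in the article can be computed from simulator traces. The sketch below assumes a simplified record format already extracted from an ns-2 trace file (event, time, packet id) and computes packet delivery ratio and average latency; the records are made up for illustration.

```javascript
// Sketch: compute packet loss and average latency from simplified trace
// records (assumed already parsed out of an ns-2 trace file).
// Each record: { event: "s" | "r", time: seconds, id: packetId }
function qosMetrics(records) {
  var sent = {}, latencies = [];
  records.forEach(function (r) {
    if (r.event === "s") sent[r.id] = r.time;
    else if (r.event === "r" && r.id in sent) {
      latencies.push(r.time - sent[r.id]);
      delete sent[r.id];
    }
  });
  var lost = Object.keys(sent).length;
  var delivered = latencies.length;
  return {
    deliveryRatio: delivered / (delivered + lost),
    avgLatency: latencies.reduce(function (a, b) { return a + b; }, 0) /
                (delivered || 1)
  };
}

// Illustrative records: packet 2 is lost.
console.log(qosMetrics([
  { event: "s", time: 0.10, id: 1 }, { event: "r", time: 0.14, id: 1 },
  { event: "s", time: 0.20, id: 2 }
])); // -> deliveryRatio 0.5, avgLatency ≈ 0.04
```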
Abstract:
The multimedia development that has taken place in university classrooms in recent years has caused a revolution at the psychological level among students and teachers, inside and outside the classroom. Slide show applications have become a key supporting element for university professors, who in many cases rely blindly on them for teaching. Moreover, ill-conceived slides, poorly structured and with a vast amount of multimedia content, can be the basis of faulty communication between teacher and student, who is overwhelmed by appearance and presentation, to the neglect of content. The same applies to web pages. This paper focuses on the study and analysis of the impact of slide show presentations and web pages on the process of teaching and learning, and their positive and negative influence on the student's learning process, paying particular attention to the consequences for the level of attention within the classroom and for study outside the classroom. The study is performed by means of a qualitative analysis of student surveys conducted during the last eight academic years at the Civil Engineering School of the Polytechnic University of Madrid. It presents some of the weaknesses of multimedia material, including the difficulty students have studying from it, owing to the many distractions they face and the enticements web pages offer, as well as the insubstantial content and shallowness of study caused by poorly formulated presentations.
Abstract:
A mobile ad hoc network (MANET) is a collection of wireless mobile nodes that can dynamically configure a network without a fixed infrastructure or centralized administration. This makes it ideal for emergency and rescue scenarios, where information sharing is essential and should occur as soon as possible. This article discusses which of the routing strategies for mobile ad hoc networks (proactive, reactive and hierarchical) performs better in such scenarios. Using a real urban area as the setting for the emergency and rescue scenario, we calculate the node density and the mobility model needed for validation. The NS2 simulator has been used in our study. We also show that hierarchical routing strategies are better suited to this type of scenario.
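The node density used to parameterise such a scenario is simply the number of nodes per unit area of the chosen urban zone; a minimal sketch, with assumed figures, follows.

```javascript
// Sketch: node density for an emergency scenario over a real urban area.
// The figures (node count, area) are illustrative assumptions.
function nodeDensity(nodes, areaKm2) {
  return nodes / areaKm2; // nodes per square kilometre
}
console.log(nodeDensity(150, 2.5)); // -> 60 nodes/km^2
```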
Abstract:
This Final Year Project covers the analysis, design and implementation of a web system that allows users to familiarize themselves with the Human Development Index (HDI), published annually by the United Nations, offering a management and download service for a mobile application related to that index. The mobile application is an educational game based on questions about the HDI of countries, developed in parallel with this project. The web service implemented in this project facilitates the download, administration and update of contents, as well as the interaction between users. The system consists of a web server, a database of users and contents, and a web portal from which the mobile application can be downloaded, game statistics can be queried, and the HDI can be explored without needing to play. The advanced search engine developed for exploring the HDI allows users to acquire skills and train on their own to improve their game results. System administrators can manage the content of the portal, the users requesting registration, and the functionality offered, i.e., game updates, forums and news. The installation of the implemented system on a web server has allowed its successful verification, as well as the provision of the information and awareness service on the HDI, updated with United Nations data, which was the original motivation of the project.
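The advanced search described above can be pictured as a filter over the UN country dataset; a minimal sketch follows, with made-up records and a hypothetical HDI-range query rather than the project's actual implementation.

```javascript
// Sketch of the advanced HDI search: filter countries by an HDI range and
// sort by index. The records are illustrative, not actual UN data.
var countries = [
  { name: "Norway",  hdi: 0.944 },
  { name: "Spain",   hdi: 0.869 },
  { name: "Bolivia", hdi: 0.662 }
];

function searchByHdi(list, min, max) {
  return list
    .filter(function (c) { return c.hdi >= min && c.hdi <= max; })
    .sort(function (a, b) { return b.hdi - a.hdi; });
}

console.log(searchByHdi(countries, 0.8, 1.0)); // -> Norway, Spain
```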
Abstract:
This bachelor's thesis presents a prototype of a hybrid cross-platform mobile application for Android and iOS. Hybrid mobile applications are a combination of mobile web applications and native mobile applications. They are built partially with web technologies, yet they can also access native features and sensors of the device. To the user they look like native applications, as they are downloaded from the application stores and installed on the device. The prototype consists of the migration of the financial news module of the current mobile applications of a banking company, reimplementing it as a hybrid application using one of the frameworks available on the market for that purpose. Hybrid development can save time and money when targeting more than one mobile platform. The goal of the project is the evaluation of the advantages and disadvantages that hybrid development offers in terms of cost reduction, development time and the final result of the application. The project consists of several phases. The first phase is a study of hybrid applications currently on the market, using the examples of LinkedIn, Facebook and the Financial Times, with emphasis on the technologies used, mobile network usage and the problems encountered. This is followed by a comparison of the most popular cross-platform development frameworks for hybrid applications in terms of their strategy, supported platforms, programming languages, access to native device capabilities and licensing. This first phase results in the choice of the framework best suited to the requirements of the project, PhoneGap, and continues with a deeper analysis of its architecture, features and components. The next phase begins with a study of the company's current applications in order to extract the necessary source code and adapt it to the architecture of the prototype. The prototype makes use of the feature PhoneGap offers to access the native layer of the device: plugins. A plugin is designed and developed to access the native layer on each platform. Once the prototype is developed for Android, it is migrated and adapted to the iOS platform. Finally, the prototypes are evaluated in terms of ease and time of development, performance, functionality, and the look and feel of the user interface.
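The JavaScript side of such a plugin is a thin bridge: web code calls the native layer through PhoneGap/Cordova's exec function, naming the native service and action that each platform implements (Java on Android, Objective-C on iOS). A minimal sketch follows; the service name, action and arguments are illustrative, not the project's actual plugin.

```javascript
// Minimal sketch of the JavaScript side of a PhoneGap/Cordova plugin call.
// "NewsPlugin" / "getNativeNews" are illustrative names; a matching native
// class must be implemented and registered on each targeted platform.
function fetchNativeNews(section, onSuccess, onError) {
  cordova.exec(
    onSuccess,        // called with the result from the native layer
    onError,          // called if the native call fails
    "NewsPlugin",     // native service (plugin) name
    "getNativeNews",  // action implemented natively per platform
    [section]         // arguments marshalled to the native side
  );
}

fetchNativeNews("markets",
  function (news) { console.log(news.length + " items"); },
  function (err)  { console.error(err); });
```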
Abstract:
We present direct-drive target design studies for the Laser MégaJoule using two distinct initial aspect ratios (A = 34 and A = 5). Laser pulse shapes are optimized by a random-walk method, and drive power variations are used to cover a wide range of implosion velocities between 260 km/s and 365 km/s. For selected implosion velocities and for each initial aspect ratio, scaled-target families are built in order to find the self-ignition threshold. High-gain shock ignition is also investigated in the context of the Laser MégaJoule for marginally igniting targets below their self-ignition threshold.
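The random-walk optimisation of pulse shapes mentioned above can be sketched generically: perturb the pulse parameters at random, evaluate a figure of merit, and keep the perturbation when it improves. The sketch below uses a stand-in objective; in practice the figure of merit would come from the implosion simulation, and the step size and parameterisation are assumptions.

```javascript
// Generic random-walk optimisation sketch, of the kind used to tune laser
// pulse-shape parameters. The objective below is a stand-in; in practice it
// would be a figure of merit returned by the hydrodynamic simulation.
function randomWalkOptimise(params, objective, steps, stepSize) {
  var best = params.slice();
  var bestScore = objective(best);
  for (var i = 0; i < steps; i++) {
    var trial = best.map(function (p) {
      return p * (1 + stepSize * (2 * Math.random() - 1)); // random perturbation
    });
    var score = objective(trial);
    if (score > bestScore) { best = trial; bestScore = score; } // keep improvement
  }
  return { params: best, score: bestScore };
}

// Stand-in objective with its maximum at (2, 5, 1):
var result = randomWalkOptimise([1, 1, 1], function (p) {
  return -(Math.pow(p[0] - 2, 2) + Math.pow(p[1] - 5, 2) + Math.pow(p[2] - 1, 2));
}, 2000, 0.1);
console.log(result.params, result.score);
```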
Abstract:
There are several different standardised and widespread formats to represent emotions. However, there is no standard semantic model yet. This paper presents a new ontology, called Onyx, that aims to become such a standard while adding concepts from the latest Semantic Web models. In particular, the ontology focuses on the representation of Emotion Analysis results, but the model is abstract and inherits from previous standards and formats, so it can be used as a reference representation of emotions in any future application or ontology. To prove this, we have translated resources from the EmotionML representation to Onyx. We also present several ways in which developers could benefit from using this ontology instead of an ad hoc representation. Our ultimate goal is to foster the use of semantic technologies for Emotion Analysis while following the Linked Data ideals.
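To give a flavour of what an Onyx-annotated result might look like, the sketch below builds an emotion annotation as JSON-LD in JavaScript. The namespace, property names (hasEmotionSet, hasEmotion, hasEmotionCategory, hasEmotionIntensity) and category URI are assumptions to be checked against the published ontology.

```javascript
// Sketch: an Emotion Analysis result annotated with the Onyx vocabulary,
// serialised as JSON-LD. Namespace and property names are assumptions.
var analysisResult = {
  "@context": {
    "onyx": "http://www.gsi.dit.upm.es/ontologies/onyx/ns#" // assumed namespace
  },
  "@id": "_:analysis1",
  "@type": "onyx:EmotionAnalysis",
  "onyx:hasEmotionSet": {
    "@type": "onyx:EmotionSet",
    "onyx:hasEmotion": {
      "@type": "onyx:Emotion",
      "onyx:hasEmotionCategory": { "@id": "http://example.org/emotions#joy" },
      "onyx:hasEmotionIntensity": 0.8
    }
  }
};

console.log(JSON.stringify(analysisResult, null, 2));
```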