483 results for grafana, SEPA, Plugin, RDF, SPARQL


Relevance: 10.00%

Abstract:

In this paper we present a dataset composed of domain-specific sentiment lexicons in six languages for two domains. We used existing collections of reviews from Trip Advisor, Amazon, the Stanford Network Analysis Project and the OpinRank Review Dataset. We use an RDF model based on the lemon and Marl formats to represent the lexicons. We describe the methodology that we applied to generate the domain-specific lexicons and provide access information for our datasets.
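As a rough illustration of the representation described above, the following sketch builds one hypothetical lexicon entry with the lemon and Marl vocabularies using Apache Jena. The namespace URIs, the example word and the polarity value are assumptions for illustration only and are not taken from the dataset.

// Minimal sketch: one lemon lexical entry carrying a Marl polarity annotation.
// Namespaces, the entry URI and the score are hypothetical.
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class LexiconEntrySketch {
    public static void main(String[] args) {
        String LEMON = "http://lemon-model.net/lemon#";                  // assumed lemon namespace
        String MARL  = "http://www.gsi.dit.upm.es/ontologies/marl/ns#";  // assumed Marl namespace

        Model m = ModelFactory.createDefaultModel();
        m.setNsPrefix("lemon", LEMON);
        m.setNsPrefix("marl", MARL);

        Property canonicalForm = m.createProperty(LEMON, "canonicalForm");
        Property writtenRep    = m.createProperty(LEMON, "writtenRep");
        Property hasPolarity   = m.createProperty(MARL, "hasPolarity");
        Property polarityValue = m.createProperty(MARL, "polarityValue");

        // Hypothetical hotel-domain entry: "comfortable", annotated as positive.
        Resource entry = m.createResource("http://example.org/lexicon/hotel/en/comfortable");
        Resource form  = m.createResource();                  // blank node for the canonical form
        form.addProperty(writtenRep, m.createLiteral("comfortable", "en"));
        entry.addProperty(canonicalForm, form);
        entry.addProperty(hasPolarity, m.createResource(MARL + "Positive"));
        entry.addLiteral(polarityValue, 0.75);

        m.write(System.out, "TURTLE");                        // emit the entry as Turtle
    }
}

Serialising such entries as Turtle or RDF/XML is what makes the lexicons queryable alongside other linguistic Linked Data.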

Relevance: 10.00%

Abstract:

Extracting opinions and emotions from text is becoming increasingly important, especially since the advent of micro-blogging and social networking. Opinion mining is particularly popular and is now supported by many public services, datasets and lexical resources. Unfortunately, there are few lexical and semantic resources available for emotion recognition that could foster the development of new emotion-aware services and applications. The diversity of theories of emotion and the absence of a common vocabulary are two of the main barriers to the development of such resources. This situation motivated the creation of Onyx, a semantic vocabulary of emotions with a focus on lexical resources and emotion analysis services. It follows a linguistic Linked Data approach, is aligned with the Provenance Ontology, and has been integrated with the Lexicon Model for Ontologies (lemon), a popular RDF model for representing lexical entries. This approach also offers a new and interesting way to work with different theories of emotion. As part of this work, Onyx has been aligned with EmotionML and WordNet-Affect.
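A minimal sketch of what an Onyx-style annotation might look like when built with Apache Jena is given below; the namespace URIs and the specific property and category names are assumptions for illustration, not a verbatim excerpt of the vocabulary.

// Minimal sketch: attaching an emotion set with one categorised emotion to a lexical entry.
// The Onyx and WordNet-Affect namespaces, property names and intensity value are hypothetical.
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;

public class OnyxAnnotationSketch {
    public static void main(String[] args) {
        String ONYX = "http://www.gsi.dit.upm.es/ontologies/onyx/ns#";      // assumed Onyx namespace
        String WNA  = "http://www.gsi.dit.upm.es/ontologies/wnaffect/ns#";  // assumed WordNet-Affect namespace

        Model m = ModelFactory.createDefaultModel();
        m.setNsPrefix("onyx", ONYX);

        Resource entry      = m.createResource("http://example.org/lexicon/en/delighted");
        Resource emotionSet = m.createResource();   // blank node grouping the emotions
        Resource emotion    = m.createResource();

        // entry -> emotion set -> emotion, with a category and an intensity.
        entry.addProperty(m.createProperty(ONYX, "hasEmotionSet"), emotionSet);
        emotionSet.addProperty(m.createProperty(ONYX, "hasEmotion"), emotion);
        emotion.addProperty(m.createProperty(ONYX, "hasEmotionCategory"),
                            m.createResource(WNA + "joy"));
        emotion.addLiteral(m.createProperty(ONYX, "hasEmotionIntensity"), 0.8);

        m.write(System.out, "TURTLE");
    }
}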

Relevance: 10.00%

Abstract:

Over the last few years there has been a huge growth in biomedical data sources. The emergence of new techniques for generating genomic data, and of databases that contain this information, has created the need to store it so that it can be accessed and worked with. The information produced in biomedical research is stored in databases because they allow data to be stored and managed in a quick and simple way, and they come in a variety of formats, such as Excel, CSV or RDF. Current biomedical research is based on data analysis, seeking correlations that allow, for example, new treatments or more effective therapies for a specific disease or ailment to be inferred. The volume of data handled is very large and disparate, which makes it necessary to develop methods for automatically integrating and homogenizing the heterogeneous data. The European project p-medicine (FP7-ICT-2009-270089) aims to assist medical researchers, in this case in cancer research, by providing them with new tools for managing data and generating new knowledge from the analysis of the managed data. The ingestion of data into the p-medicine platform and its subsequent processing with the provided methods aim to generate new models to support clinical decision-making. Within this project there are tools for the integration of heterogeneous data, the design and management of clinical trials, the simulation and visualization of tumors, and statistical data analysis. Precisely in the field of heterogeneous data integration there is a need to add external information from public databases and to relate it to the existing data through semantic integration techniques. To address this need a tool called Term Searcher has been created, which carries out this process in a semi-automatic way. This work describes the development of the tool and the algorithms created for its operation. The tool provides functionality that did not previously exist in the p-medicine project for adding new data from public sources and semantically integrating them with private data.

Relevance: 10.00%

Abstract:

The aim of this project is to create a website that is useful to both the employees and the students of a university: employees can add information after logging in with a username and password, and students can view that information. Employees may edit and display details such as their title, room and faculty (chosen from a list defined by the administrator) and, most importantly, their schedule, whether it consists of classes, tutoring, free time, or any other task type the administrator defines. There is an administrator responsible for managing employees, the available faculties and the task types that employees can use in their schedules. Students can see the employees' schedules and rooms on the homepage; the different task types are shown in different colours, and the information can be filtered by faculty, employee or day. To achieve this goal, we decided to program in Java using Servlets, which generate the responses to requests arriving from users of the website. We also use JSP files, which allow us to create the different pages. We use JSP rather than plain HTML because the pages must be dynamic: we do not only want to show fixed information, but information that changes depending on user requests. A JSP file lets us generate HTML while also embedding Java code, which is necessary for our purpose. Since the stored information is not fixed and must be modifiable at any time by employees and the administrator, we need a database that can be accessed from the web application. We chose SQLite because it integrates well with our application and offers fast responses. To access the database from the program, we simply connect to it and, with very few lines of code, add, delete or modify entries in its tables. To simplify the initial creation of the database and its first tables, we use a Mozilla Firefox browser plugin called SQLite Manager, which provides a friendlier interface. Finally, we need a server that supports and implements the Servlet and JSP specifications. We chose Tomcat, a Servlet container that is free, easy to use and compatible with our program. The whole project was developed in the Eclipse environment, which is also free and allows us to integrate the database and the server and to program the JSPs and Servlets. Having presented the tools used, the first step is to organize the structure of the site, relating each JSP file to the Servlets it must access. Next, we create the database and the different Servlets, and verify that the database accesses work correctly. From there, the site is built up step by step, showing in each place the required information and redirecting between pages. In this way a complex website can be built for free and without being an expert in the field.
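As a sketch of the Servlet-plus-SQLite pattern described above (not the project's actual code), the class below reads schedule rows over JDBC and renders them as a simple HTML list. The database path and the table and column names (schedule, employee, task, day) are hypothetical; the org.xerial SQLite JDBC driver and the javax.servlet API provided by Tomcat are assumed to be on the classpath. In the real application the result would more likely be forwarded to a JSP for rendering.

import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal sketch: list every employee's scheduled tasks from an SQLite database.
public class ScheduleServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        out.println("<html><body><h1>Schedules</h1><ul>");
        // Hypothetical database file and schema.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:university.db");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT employee, task, day FROM schedule ORDER BY employee, day")) {
            while (rs.next()) {
                out.printf("<li>%s: %s (%s)</li>%n",
                        rs.getString("employee"), rs.getString("task"), rs.getString("day"));
            }
        } catch (SQLException e) {
            throw new ServletException("Could not read the schedule table", e);
        }
        out.println("</ul></body></html>");
    }
}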

Relevance: 10.00%

Abstract:

Acknowledgements We would like to gratefully acknowledge the data provided by SEPA, Iain Malcolm. Mark Speed, Susan Waldron and many MSS staff helped with sample collection and lab analysis. We thank the European Research Council (project GA 335910 VEWA) for funding and are grateful for the constructive comments provided by three anonymous reviewers.

Relevance: 10.00%

Abstract:

Enteropathogenic Escherichia coli (EPEC) causes a characteristic histopathology in intestinal epithelial cells called the attaching and effacing lesion. Although the histopathological lesion is well described, the bacterial factors responsible for it are poorly characterized. We have identified four EPEC chromosomal genes whose predicted protein sequences are similar to components of a recently described secretory pathway (type III) responsible for exporting proteins lacking a typical signal sequence. We have designated the genes sepA, sepB, sepC, and sepD (sep, for secretion of E. coli proteins). The predicted Sep polypeptides are similar to the Lcr (low calcium response) and Ysc (Yersinia secretion) proteins of Yersinia species and the Mxi (membrane expression of invasion plasmid antigens) and Spa (surface presentation of antigens) regions of Shigella flexneri. Culture supernatants of EPEC strain E2348/69 contain several polypeptides ranging in size from 110 kDa to 19 kDa. Proteins of comparable size were recognized by human convalescent serum from a volunteer experimentally infected with strain E2348/69. A sepB mutant of EPEC secreted only the 110-kDa polypeptide and was defective in the formation of attaching and effacing lesions and in protein-tyrosine phosphorylation in tissue culture cells. These phenotypes were restored upon complementation with a plasmid carrying an intact sepB gene. These data suggest that the EPEC Sep proteins are components of a type III secretory apparatus necessary for the export of virulence determinants.

Relevance: 10.00%

Abstract:

The Spanish Inquisition has always been one of the most attractive and controversial topics in the history of Spain. The existing bibliography on it is enormous, as are the approaches with which researchers and the curious have studied it. Historians have addressed many aspects of its history, in some cases tending towards repetition and oversaturation. The victims, torture, procedure... are recurring themes, while others, also fundamental to understanding the institution, have remained ignored for reasons that are not entirely clear. Several of these questions concern the internal workings of the district tribunals and, more specifically, their administrative management. The present thesis, which belongs to a new line of research, inquisitorial Diplomatics, seeks to fill this historiographical gap as far as possible. Its first objective is therefore to present the main documents produced by the district tribunals of the Holy Office in the course of their functions, which translates into a wide thematic spectrum. The next major objective is to examine in depth the figure of the secretaries of these tribunals, setting out their categories, functions, means of entering the office, remuneration, etc., in order to draw as complete a profile of them as possible, given that they were responsible for a large part of the tribunal's administrative management. Having studied the documentation and those who produced it, it is also necessary to analyse the importance of the inquisitorial archives in that management, looking into their history, functions, organization...

Relevance: 10.00%

Abstract:

The education of the prince in the early modern period was of capital importance, since through the education of the future ruler the aim was to turn him into an ideal ruler, a perfect prince. These ideas came from the late Middle Ages, when the "Mirrors for Princes" and similar treatises circulated in enormous numbers, advocating the education of the future ruler in different respects: the first and fundamental one was religious, to be complemented or broadened with cultural and chivalric training, but always giving priority to religious education. Alongside these humanistic studies, and together with history, accounts of wars and battle strategies began to be developed and brought to the prince. These accounts had a clearly military character, so that he would learn and understand what war was. In addition, during his childhood another kind of training also took place, that of arms and military preparation, in which he was taught to fight and to handle different types of weapons, endowing the prince with agility, vigour, strength and physical dexterity. Along with this physical preparation and military instruction, he was taught to ride, to hunt, to joust, etc.

Relevance: 10.00%

Abstract:

Car Fluff samples collected from a shredding plant in Italy were classified based on particle size, and three different size fractions were obtained in this way. A comparison between these size fractions and the original light fluff was made from two different points of view: (i) the properties of each size fraction as a fuel were evaluated and (ii) the pollutants evolved when each size fraction was subjected to combustion were studied. The aim was to establish which size fraction would be the most suitable for the purposes of energy recovery. The light fluff analyzed contained up to 50 wt.% fines (particle size < 20 mm). However, its low calorific value and high emissions of polychlorinated dioxins and furans (PCDD/Fs), generated during combustion, make the fines fraction inappropriate for energy recovery, and therefore, landfilling would be the best option. The 50–100 mm fraction exhibited a high calorific value and low PCDD/F emissions were generated when the sample was combusted, making it the most suitable fraction for use as refuse-derived fuel (RDF). Results obtained suggest that removing fines from the original ASR sample would lead to a material product that is more suitable for use as RDF.

Relevance: 10.00%

Abstract:

Mechanical treatments such as shredding or extrusion are applied to municipal solid wastes (MSW) to produce refuse-derived fuels (RDF). In this way, a waste fraction (mainly composed of food waste) is removed and the quality of the fuel is improved. In this research, simultaneous thermal analysis (STA) was used to investigate how different mechanical treatments applied to MSW influence the composition and combustion behaviour of fuel blends produced by combining MSW or RDF with wood in different ratios. Shredding and screening proved to be a more efficient mechanical treatment than extrusion for reducing the chlorine content of a fuel, which would improve its quality. This study revealed that when plastics and food waste are combined in the fuel matrix, the thermal decomposition of the fuels is accelerated. The combination of MSW or RDF and woody materials in a fuel blend has a positive impact on its decomposition.

Relevance: 10.00%

Abstract:

Two-sided payment card markets generate costs that have to be distributed among the participating actors. For this purpose, payment card networks set an interchange fee, which is the fee paid by the merchant's bank to the cardholder's bank per transaction. While in recent years many antitrust authorities around the world - including the European Commission - have opened proceedings against card brands in order to verify whether agreements to collectively establish the level of interchange fees are anticompetitive, the Reserve Bank of Australia, as a regulator, has directly tried to address market failures by lowering the level of interchange fees and changing some network rules. The US has followed with new legislation on financial consumer protection, which also intervenes on interchange fees. This has opened a strong debate not only on the legitimacy of interchange fees, but also on the appropriateness of different public tools to address such issues. Drawing on economic and legal theories and on a comparative analysis of recent case law in the EU and other jurisdictions, this work investigates whether a regulatory rather than a purely competition-policy approach would be more appropriate in this field, considering in particular, at EU level, all the competition and regulatory concerns that have arisen from the operation of SEPA with multilateral interchange fees. The paper concludes that a broader regulatory approach could address some of the shortcomings of a purely antitrust approach, proving highly beneficial to the development of an efficient European single payments area.

Relevance: 10.00%

Abstract:

The Linked Data approach is presented, which uses descriptions written in RDF to make explicit to machines the semantic links that exist between the resources populating the Web. The DBpedia project is then described, which aims to reorganize the information available on Wikipedia in Linked Data format, so as to make it easier for users to consult and to make the execution of complex queries possible. The challenge of integrating multimedia content (images, audio files, video...) into DBpedia is then discussed, and three projects working in this direction are analysed: Multipedia, DBpedia Commons and IMGpedia. Finally, the importance and potential of building a Semantic Web are underlined.
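By way of illustration of the complex queries mentioned above, the sketch below runs a simple SPARQL query against the public DBpedia endpoint using Apache Jena ARQ; the query itself (listing a few resources typed as dbo:City) is an arbitrary example and not taken from the thesis.

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

// Minimal sketch: SELECT a handful of DBpedia resources over the public SPARQL endpoint.
public class DbpediaQuerySketch {
    public static void main(String[] args) {
        String sparql =
            "PREFIX dbo: <http://dbpedia.org/ontology/> " +
            "SELECT ?city WHERE { ?city a dbo:City } LIMIT 5";
        Query query = QueryFactory.create(sparql);
        try (QueryExecution exec =
                 QueryExecutionFactory.sparqlService("https://dbpedia.org/sparql", query)) {
            ResultSet results = exec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("city").getURI());  // print each matching URI
            }
        }
    }
}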

Relevance: 10.00%

Abstract:

The international perspectives on these issues are especially valuable in an increasingly connected, but still institutionally and administratively diverse world. The research addressed in several chapters in this volume includes issues around technical standards bodies like EpiDoc and the TEI, engaging with ways these standards are implemented, documented, taught, used in the process of transcribing and annotating texts, and used to generate publications and as the basis for advanced textual or corpus research. Other chapters focus on various aspects of philological research and content creation, including collaborative or community driven efforts, and the issues surrounding editorial oversight, curation, maintenance and sustainability of these resources. Research into the ancient languages and linguistics, in particular Greek, and the language teaching that is a staple of our discipline, are also discussed in several chapters, in particular for ways in which advanced research methods can lead into language technologies and vice versa and ways in which the skills around teaching can be used for public engagement, and vice versa. A common thread through much of the volume is the importance of open access publication or open source development and distribution of texts, materials, tools and standards, both because of the public good provided by such models (circulating materials often already paid for out of the public purse), and the ability to reach non-standard audiences, those who cannot access rich university libraries or afford expensive print volumes. Linked Open Data is another technology that results in wide and free distribution of structured information both within and outside academic circles, and several chapters present academic work that includes ontologies and RDF, either as a direct research output or as an essential part of the communication and knowledge representation. Several chapters focus not on the literary and philological side of classics, but on the study of cultural heritage, archaeology, and the material supports on which original textual and artistic material are engraved or otherwise inscribed, addressing both the capture and analysis of artefacts in both 2D and 3D, the representation of data through archaeological standards, and the importance of sharing information and expertise between the several domains both within and without academia that study, record and conserve ancient objects. Almost without exception, the authors reflect on the issues of interdisciplinarity and collaboration, the relationship between their research practice and teaching and/or communication with a wider public, and the importance of the role of the academic researcher in contemporary society and in the context of cutting edge technologies. How research is communicated in a world of instant-access blogging and 140-character micromessaging, and how our expectations of the media affect not only how we publish but how we conduct our research, are questions about which all scholars need to be aware and self-critical.

Relevance: 10.00%

Abstract:

In this paper, we present a framework for pattern-based model evolution approaches in the MDA context. In the framework, users define patterns using a pattern modeling language designed to describe software design patterns, and they can use the patterns as rules to evolve their models. Design model evolution takes place in two steps. The first step is a binding process of selecting a pattern and defining where and how to apply it in the model. The second step is an automatic model transformation that actually evolves the model according to the binding information and the pattern rule. The pattern modeling language is defined in terms of a MOF-based role metamodel, implemented using an existing modeling framework, EMF, and incorporated as a plugin to the Eclipse modeling environment. The model evolution process is also implemented as an Eclipse plugin. With these two plugins, we provide an integrated framework in which defining and validating patterns, and evolving models based on patterns, can take place in a single modeling environment.
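To make the role-metamodel idea more concrete, the sketch below defines a tiny, hypothetical pattern/role metamodel programmatically with the EMF Ecore API; the class and feature names (Pattern, Role, roles, name) are stand-ins chosen for illustration and do not reproduce the metamodel defined in the paper.

import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EReference;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

// Minimal sketch: a Pattern containing many named Roles, built with Ecore.
public class RoleMetamodelSketch {
    public static void main(String[] args) {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        EPackage pkg = f.createEPackage();
        pkg.setName("patterns");
        pkg.setNsPrefix("pat");
        pkg.setNsURI("http://example.org/patterns");    // hypothetical namespace

        EClass role = f.createEClass();
        role.setName("Role");
        EAttribute name = f.createEAttribute();
        name.setName("name");
        name.setEType(EcorePackage.Literals.ESTRING);
        role.getEStructuralFeatures().add(name);

        EClass pattern = f.createEClass();
        pattern.setName("Pattern");
        EReference roles = f.createEReference();
        roles.setName("roles");
        roles.setEType(role);
        roles.setContainment(true);
        roles.setUpperBound(-1);                        // a pattern may define many roles
        pattern.getEStructuralFeatures().add(roles);

        pkg.getEClassifiers().add(role);
        pkg.getEClassifiers().add(pattern);

        System.out.println("Defined " + pkg.getEClassifiers().size() + " classifiers");
    }
}

Binding would then amount to mapping each Role instance to a concrete model element before the automatic transformation is applied.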

Relevance: 10.00%

Abstract:

The main argument of this paper is that Natural Language Processing (NLP) does, and will continue to, underlie the Semantic Web (SW), including its initial construction from unstructured sources like the World Wide Web (WWW), whether its advocates realise this or not. Chiefly, we argue, such NLP activity is the only way up to a defensible notion of meaning at conceptual levels (in the original SW diagram) based on lower level empirical computations over usage. Our aim is definitely not to claim logic-bad, NLP-good in any simple-minded way, but to argue that the SW will be a fascinating interaction of these two methodologies, again like the WWW (which has been basically a field for statistical NLP research) but with deeper content. Only NLP technologies (and chiefly information extraction) will be able to provide the requisite RDF knowledge stores for the SW from existing unstructured text databases in the WWW, and in the vast quantities needed. There is no alternative at this point, since a wholly or mostly hand-crafted SW is also unthinkable, as is a SW built from scratch and without reference to the WWW. We also assume that, whatever the limitations on current SW representational power we have drawn attention to here, the SW will continue to grow in a distributed manner so as to serve the needs of scientists, even if it is not perfect. The WWW has already shown how an imperfect artefact can become indispensable.
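A minimal sketch of the step this argument turns on, under assumed names, is given below: an information-extraction component is taken to have produced a (subject, relation, object) string triple from free text, and that triple is materialised as RDF with Apache Jena. The extraction itself is stubbed; only the triple-to-RDF step is shown.

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;

// Minimal sketch: turn one extracted string triple into an RDF statement.
public class ExtractionToRdfSketch {
    public static void main(String[] args) {
        String EX = "http://example.org/extracted#";    // hypothetical namespace

        // Stand-in for the output of an IE system run over a sentence such as
        // "Rosalind Franklin worked at King's College London."
        String[] extracted = {"Rosalind_Franklin", "workedAt", "Kings_College_London"};

        Model m = ModelFactory.createDefaultModel();
        m.setNsPrefix("ex", EX);

        Resource subject = m.createResource(EX + extracted[0]);
        Resource object  = m.createResource(EX + extracted[2]);
        subject.addProperty(m.createProperty(EX, extracted[1]), object);

        m.write(System.out, "TURTLE");                  // contribute the statement to an RDF store
    }
}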