980 results for Meta Data, Semantic Web, Software Maintenance, Software Metrics
Abstract:
This paper describes how MPEG-4 object-based video (OBV) can be used to insert selected objects into the play-out stream to a specific user, based on a profile derived for that user. The application scenario described here is personalized product placement; the paper considers the value of this application in the current and evolving commercial media distribution market, given the huge emphasis media distributors currently place on targeted advertising. This level of application of video content requires a sophisticated content description and metadata system (e.g., MPEG-7). The scenario considers the requirement for global libraries to provide the objects to be inserted into the streams. The paper then considers the commercial trading of objects between the libraries, video service providers, advertising agencies and other parties involved in the service. Consequently, a brokerage of video objects is proposed, based on negotiation and trading using intelligent agents representing the various parties. The proposed Media Brokerage Platform is a multi-agent system structured in two layers. In the top layer, a collection of coarse-grained agents represents the real-world players, namely the providers and deliverers of media contents and the market regulator profiler; in the bottom layer, a set of finer-grained agents constitutes the marketplace, namely the delegate agents and the market agent. For knowledge representation (domain, strategic and negotiation protocols) we propose a Semantic Web approach based on ontologies. The media component contents should be represented in MPEG-7, and the metadata describing the objects to be traded should follow a specific ontology. The top-layer content providers and deliverers are modelled by intelligent autonomous agents that express their will to transact, i.e., buy or sell media components, by registering at a service registry.
The market regulator profiler creates, according to the selected profile, a market agent, which, in turn, checks the service registry for potential trading partners for a given component and invites them to the marketplace. The subsequent negotiation and actual transaction are performed by delegate agents in accordance with their profiles and the predefined rules of the market.
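The registry-then-marketplace flow described above can be sketched in a few lines. This is a minimal illustration under assumed names: the classes, methods and component identifiers below are invented for this sketch and are not part of the proposed platform's design.

```python
# Illustrative sketch of the two-layer brokerage: agents register buy/sell
# intents in a service registry, and a market agent gathers the potential
# trading partners for a given media component.

class ServiceRegistry:
    """Top layer: providers and deliverers announce their will to transact."""
    def __init__(self):
        self.entries = []  # (agent_name, role, component_id)

    def register(self, agent_name, role, component_id):
        self.entries.append((agent_name, role, component_id))

    def partners_for(self, component_id):
        return [(name, role) for name, role, cid in self.entries
                if cid == component_id]


class MarketAgent:
    """Bottom layer: checks the registry and invites partners to trade."""
    def __init__(self, registry):
        self.registry = registry

    def open_marketplace(self, component_id):
        partners = self.registry.partners_for(component_id)
        sellers = [name for name, role in partners if role == "sell"]
        buyers = [name for name, role in partners if role == "buy"]
        return sellers, buyers


registry = ServiceRegistry()
registry.register("ContentProviderA", "sell", "logo-clip-42")
registry.register("AdAgencyB", "buy", "logo-clip-42")

market = MarketAgent(registry)
sellers, buyers = market.open_marketplace("logo-clip-42")
print(sellers, buyers)  # ['ContentProviderA'] ['AdAgencyB']
```

In the full platform the matching step would be followed by delegate-agent negotiation under the market's rules; the sketch stops at partner discovery.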
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Work presented in the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering
Abstract:
The changes introduced into the European Higher Education Area (EHEA) by the Bologna Process, together with renewed pedagogical and methodological practices, have created a new teaching-learning paradigm: Student-Centred Learning. In addition, the last few years have been characterized by the application of Information Technologies, especially the Semantic Web, not only to the teaching-learning process, but also to administrative processes within learning institutions. The aim of this study was, on the one hand, to present a model for identifying and classifying Competencies and Learning Outcomes and, on the other, to develop the computer applications of the information management model, namely a relational database and an ontology.
Abstract:
Dissertation to Obtain the Degree of Master in Biomedical Engineering
Abstract:
Hybrid knowledge bases combine ontologies with non-monotonic rules, joining the best of open-world ontologies and closed-world rules. Ontologies provide a good mechanism for sharing knowledge on the Web that can be understood by both humans and machines; rules, on the other hand, can be used, e.g., to encode legal regulations or to map between sources of information. Given the dynamics present today on the Web, it is important for these hybrid knowledge bases to capture those dynamics and adapt themselves accordingly. To achieve that, it is necessary to create mechanisms capable of monitoring the information flow on the Web. To date, there are no mechanisms that allow monitoring events and performing modifications of hybrid knowledge bases autonomously. The goal of this thesis is therefore to create a system that combines hybrid knowledge bases with reactive rules, aiming to monitor events and perform actions over a knowledge base. To this end, a reactive system for the Semantic Web is developed in a logic-programming-based approach, accompanied by a language for heterogeneous rule base evolution based on the RIF Production Rule Dialect, a standard for exchanging rules over the Web.
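The reactive-rule idea above follows the event-condition-action pattern that RIF-PRD expresses with assert/retract actions. A minimal sketch of that pattern, with invented facts and rule names (this is an illustration of the general mechanism, not the thesis's actual language or system):

```python
# Event-condition-action (ECA) sketch: an incoming event is matched against
# rules whose actions assert/retract facts in a simple knowledge base.

facts = {("status", "kb", "stale")}

def rule_refresh(event, facts):
    """Condition: a 'source_updated' event and a stale flag in the KB.
    Action: retract the stale flag and assert freshness."""
    if event == "source_updated" and ("status", "kb", "stale") in facts:
        facts.discard(("status", "kb", "stale"))   # retract
        facts.add(("status", "kb", "fresh"))       # assert

def dispatch(event, facts, rules):
    """One step of the monitoring loop: fire every rule on the event."""
    for rule in rules:
        rule(event, facts)

dispatch("source_updated", facts, [rule_refresh])
print(facts)  # {('status', 'kb', 'fresh')}
```

A real hybrid system would evaluate conditions against the combined ontology-plus-rules semantics rather than a flat set of tuples; the loop structure is the same.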
Abstract:
In the recent past, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the Web, and these data are becoming increasingly geolocated. The large amounts of data so formed are what is called "Big Data", and scientists still do not fully know how to deal with them. Different Data Mining tools are used to try to extract useful information from this Big Data. In our study, we deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, meaning that they have an exact location on the Earth's surface according to a certain spatial reference system. Using Data Mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information can be accurate. Finally, we compared different Data Mining methods in order to determine which one performs best on this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent. We also found the Memory-Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
Abstract:
The proposed work presents a project for the design of an information system to manage the development of an ontology, promoted by the body coordinating archival policy, for the classification and appraisal of information in the Central and Local Administration in Portugal. Among the products proposed from this system is a website on which all the information contained in the ontology can be consulted, and from which versions of it can be downloaded with different levels of semantics. The ontology was promoted by the coordinating body of Portuguese archival policy and is based on a Consolidated List (Lista consolidada, LC) of business processes of the Public Administration (Administração Pública, AP). This incremental and collaborative list was developed from the identification of a macrostructure representative of the functions performed by the AP, the Macroestrutura Funcional (MEF). With this product completed and made available to the community on the official page of the Direção-Geral do Livro, dos Arquivos e das Bibliotecas (DGLAB), its applicability should now be leveraged. To this end, a project involving the authors is under way, in a university context, oriented towards the development of a formal vocabulary, a data model representing this set of concepts concerning business processes and the relationships between them. The intention is that this ontology may be made available in lists or directories of ontologies with search mechanisms (ontology libraries), so as to increase its use on the Semantic Web, beyond its use as a classification scheme in electronic records management systems (SEGA), business intelligence systems and knowledge management systems. This communication aims to present to the professional community a project of transversal application for all public entities and for companies with an interest in the area.
Abstract:
Master's dissertation in Systems Engineering
Abstract:
The recent advances in sequencing technologies have given all microbiology laboratories access to whole genome sequencing. Provided that tools for the automated analysis of sequence data and databases for the associated metadata are developed, whole genome sequencing will become a routine tool for large clinical microbiology laboratories. Indeed, the continuing reduction in sequencing costs and the shortening of the 'time to result' make it an attractive strategy in both research and diagnostics. Here, we review how high-throughput sequencing is revolutionizing clinical microbiology and the promise that it still holds. We discuss major applications, which include: (i) identification of target DNA sequences and antigens to rapidly develop diagnostic tools; (ii) precise strain identification for epidemiological typing and pathogen monitoring during outbreaks; and (iii) investigation of strain properties, such as the presence of antibiotic resistance or virulence factors. In addition, recent developments in comparative metagenomics and single-cell sequencing offer the prospect of a better understanding of complex microbial communities at the global and individual levels, providing a new perspective for understanding host-pathogen interactions. Being a high-resolution tool, high-throughput sequencing will increasingly influence diagnostics, epidemiology, risk management, and patient care.
Abstract:
This work reviews the main technologies involved in the Semantic Web and uses them to obtain a semantic knowledge database within the domain of traumatological surgical operations.
Abstract:
Creation of a semantic wiki for learning in the area of trauma surgery with OntoWiki and Semantic MediaWiki.
Abstract:
The general objective of this work is to become acquainted with and explore in depth the concept of the Semantic Web, and to understand the basic principles of some of the technologies on which it rests: ontologies, XML, RDF and OWL.
Abstract:
The objective of this project is to study the extent to which the submissions for Continuous Assessment Tests and practical assignments by UOC students can be plagiarized, as well as to study the different means of preventing it.
Abstract:
This project evaluates different native XML databases for storing and retrieving XML documents based on the MPEG-7 standard. MPEG-7 proposes a language for describing the content (metadata) of multimedia documents, i.e., audio and video. Its representation format is XML, and it is based on a predefined schema (the MPEG-7 XML Schema).
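The kind of store-and-retrieve operation evaluated above can be illustrated with the standard library. The document below follows MPEG-7's general XML shape but is simplified and not schema-valid; element content is invented for the example.

```python
# Minimal sketch: parse an MPEG-7-style XML description and extract
# metadata fields, the core retrieval task a native XML database serves.

import xml.etree.ElementTree as ET

doc = """\
<Mpeg7>
  <Description>
    <MultimediaContent>
      <Video>
        <Title>Lecture recording</Title>
        <MediaDuration>PT45M</MediaDuration>
      </Video>
    </MultimediaContent>
  </Description>
</Mpeg7>
"""

root = ET.fromstring(doc)
title = root.findtext(".//Video/Title")
duration = root.findtext(".//Video/MediaDuration")
print(title, duration)  # Lecture recording PT45M
```

A native XML database would run comparable path queries (typically XPath or XQuery) over many stored descriptions instead of one in-memory string.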