943 results for SIB Semantic Information Broker OSGI Semantic Web


Relevance: 100.00%

Abstract:

The fundamental objective of this work is to study the basic concepts of the Semantic Web through two database management systems (DBMS) used in the Semantic Web context, OWLIM-Lite and Virtuoso, which will allow us to design and implement an ontology.
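As a hedged illustration of the kind of setup described, the sketch below queries a Virtuoso SPARQL endpoint with the SPARQLWrapper library; the endpoint URL (Virtuoso's default port) and the queried class are assumptions, not details from the abstract.

```python
# Minimal sketch: querying a Virtuoso SPARQL endpoint (assumed to run at
# the default http://localhost:8890/sparql; adjust for your installation).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:8890/sparql")
sparql.setQuery("""
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    SELECT ?cls WHERE { ?cls a owl:Class } LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Print the first few OWL classes found in the store.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["cls"]["value"])
```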

Relevance: 100.00%

Abstract:

Introduction to LOD. The technological evolution of the last decade and a review of the main related concepts, leading up to the Semantic Web and linked open data. Evolution in the library field and in catalogues: trends aimed at improving access, Web 2.0 and full-text access to information. Why and how to take part in the LOD movement. Initiatives in the library world: what is being done internationally in the world of catalogues and cataloguing? What, then, will the catalogues of the future be like, and, finally, how does this take shape in practice?

Relevance: 100.00%

Abstract:

The Semantic Web can ease and speed up learning and information retrieval through the relationships between concepts that it provides thanks to the use of ontologies. We used the Protégé tool to create our ontology, and based its design on the basic features of Twitter.
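As a rough sketch of what such an ontology might look like outside Protégé, the rdflib snippet below declares a couple of Twitter-style classes and a linking property; the namespace and all names are illustrative assumptions.

```python
# Minimal sketch with rdflib: a toy Twitter-style ontology.
# The namespace and class names are assumptions for illustration.
from rdflib import Graph, Namespace, OWL, RDF, RDFS

TW = Namespace("http://example.org/twitter#")
g = Graph()
g.bind("tw", TW)

# Two classes and one object property linking them.
g.add((TW.User, RDF.type, OWL.Class))
g.add((TW.Tweet, RDF.type, OWL.Class))
g.add((TW.posts, RDF.type, OWL.ObjectProperty))
g.add((TW.posts, RDFS.domain, TW.User))
g.add((TW.posts, RDFS.range, TW.Tweet))

print(g.serialize(format="turtle"))
```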

Relevance: 100.00%

Abstract:

Introduction to semantic wikis: What are they? What are they like? How do they work? When are they needed?

Relevance: 100.00%

Abstract:

This work describes a knowledge base of human ALU elements. The ontology incorporates SO and GO terms and is aimed at describing the genomic context of the ALU set. For each ALU element, the nearest gene and transcript are stored, together with their functional annotation according to GO, the state of the surrounding chromatin and the transcription factors present in the ALU. Semantic rules have been incorporated to facilitate the storage, querying and integration of the information. The ALU ontology is fully analysable with reasoners such as Pellet and has been partially transferred to a semantic wiki.
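A minimal sketch of running Pellet over such an ontology with the owlready2 library, which bundles its own copy of Pellet; the ontology IRI and the AluElement class name are placeholder assumptions.

```python
# Minimal sketch: classifying an OWL ontology with Pellet via owlready2.
# The ontology IRI and the AluElement class are placeholder assumptions.
from owlready2 import get_ontology, sync_reasoner_pellet

onto = get_ontology("http://example.org/alu.owl").load()

# Run Pellet; inferred facts are written back into the loaded world.
with onto:
    sync_reasoner_pellet()

# List inferred instances of the (assumed) AluElement class.
for individual in onto.search(type=onto.AluElement):
    print(individual.iri)
```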

Relevance: 100.00%

Abstract:

This paper advocates the creation of a federated, hybrid database in the cloud, integrating law data from all available public sources into one single open-access system, adding, in the process, relevant metadata to the indexed documents, including the identification of social and semantic entities and the relationships between them, using linked open data techniques and standards such as RDF. Examples of potential benefits and applications of this approach are also provided, including, among others, experiences from our previous research, in which data integration, graph databases and social and semantic network analysis were used to identify power relations, litigation dynamics and cross-reference patterns both intra- and inter-institutionally, covering most of the world's international economic courts.
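As a hedged illustration of the linked-data representation involved (not the paper's system), the rdflib sketch below records a cross-reference between two court decisions as an RDF triple; the namespace and case identifiers are assumptions.

```python
# Minimal sketch: representing cross-references between court decisions as
# linked data with rdflib. Namespace and case identifiers are assumptions.
from rdflib import Graph, Namespace, URIRef

LAW = Namespace("http://example.org/law#")
g = Graph()
g.bind("law", LAW)

case_a = URIRef("http://example.org/cases/wto-ds001")
case_b = URIRef("http://example.org/cases/wto-ds002")

# One decision citing another becomes a single triple; graph queries can
# then surface citation patterns across institutions.
g.add((case_b, LAW.cites, case_a))

for citing, _, cited in g.triples((None, LAW.cites, None)):
    print(f"{citing} cites {cited}")
```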

Relevance: 100.00%

Abstract:

Awareness is required to support all forms of cooperation. In Computer Supported Collaborative Learning (CSCL), awareness can be used to enhance collaborative opportunities across physical distances and in computer-mediated environments. Shared Knowledge Awareness (SKA) aims to increase students' perception of the knowledge they share in a collaborative learning scenario, and also concerns the understanding that the group has of that shared knowledge. However, it is very difficult to produce accurate awareness indicators based on informal message exchange among the participants. We therefore propose a semantic system for cooperation that makes use of formal methods for knowledge representation based on Semantic Web technologies. From this semantics-enhanced repository and these messages, more accurate awareness indicators could be computed.
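Purely as an illustrative assumption (the abstract does not specify how indicators are computed), one naive SKA-style indicator could be the overlap of the ontology concepts each student's contributions are annotated with:

```python
# Illustrative sketch (not the paper's method): a naive shared-knowledge
# indicator computed as the Jaccard overlap of the ontology concepts each
# student's contributions have been annotated with.
def shared_knowledge_awareness(annotations: dict[str, set[str]]) -> float:
    """annotations maps each student to the set of concept IRIs they used."""
    concept_sets = list(annotations.values())
    shared = set.intersection(*concept_sets)
    total = set.union(*concept_sets)
    return len(shared) / len(total) if total else 0.0

students = {
    "alice": {"ex:Ontology", "ex:RDF", "ex:SPARQL"},
    "bob": {"ex:RDF", "ex:SPARQL", "ex:OWL"},
}
print(shared_knowledge_awareness(students))  # 0.5
```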

Relevance: 100.00%

Abstract:

The ontology that has been designed covers the basic concepts of Twitter, the relationships between them and the constraints that must be respected. The ontology was designed with Protégé and is available in OWL format. An application has been developed to populate the ontology with the tweets obtained from a Twitter search. Twitter is accessed via the API it offers for third-party applications. The result of running the application is an RDF/XML file with the triples corresponding to the instances of the objects in the ontology.
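A minimal sketch of the populate-and-serialize step with rdflib; the tweet data and the namespace are placeholder assumptions, and a real application would fetch tweets from the Twitter API instead.

```python
# Minimal sketch: turning fetched tweets into ontology instances and
# serializing them as RDF/XML with rdflib. The tweet data and namespace
# are assumptions; a real application would call the Twitter API here.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

TW = Namespace("http://example.org/twitter#")
g = Graph()
g.bind("tw", TW)

tweets = [{"id": "42", "text": "Hello semantic web!"}]  # placeholder data

for t in tweets:
    node = URIRef(f"http://example.org/tweet/{t['id']}")
    g.add((node, RDF.type, TW.Tweet))
    g.add((node, TW.text, Literal(t["text"])))

# Write the instances out as RDF/XML, as the abstract describes.
g.serialize(destination="tweets.rdf", format="xml")
```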

Relevance: 100.00%

Abstract:

In this work we have tried to provide a current view of the world of linked open data in the field of education. We have reviewed both the applications aimed at implementing these technologies in existing data repositories (web pages, repositories of educational objects, repositories of courses and educational programmes) and those meant to support new paradigms within the world of education.

Relevance: 100.00%

Abstract:

This work studied the parsing of a file conforming to the IFC (Industry Foundation Classes) data model, the further processing of the data, and data transfer between applications. The available options for implementing the data transfer programmatically were examined, together with the direction in which data transfer is heading in the future. In the applied part, the parsing and interpretation of an IFC-standard ISO 10303 file (Part 21) into XML form was implemented. The application parses and interprets an IFC file produced with a CAD application, using the C# programming language, and stores the data in an XML database to be read by a cost-estimation application.
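The thesis implemented this in C#; purely as an illustration in Python, the sketch below parses one ISO 10303-21 (STEP) entity line with a regular expression and emits it as XML. The sample line and the naive comma split are assumptions; real Part 21 parsing must handle nested aggregates and quoted parameters.

```python
# Illustrative sketch (the thesis used C#): turning one ISO 10303-21 (STEP)
# entity line into XML. Real IFC files need a proper parser that handles
# nested aggregates and quoted strings; this naive split is an assumption.
import re
import xml.etree.ElementTree as ET

line = "#37=IFCWALL('2O2Fr$t4X7Zf8NOew3FL9r',#5,'Wall-001');"

match = re.match(r"#(\d+)=(\w+)\((.*)\);", line)
entity_id, entity_type, params = match.groups()

elem = ET.Element(entity_type, id=entity_id)
for i, value in enumerate(params.split(",")):
    ET.SubElement(elem, "param", index=str(i)).text = value.strip("'")

print(ET.tostring(elem, encoding="unicode"))
```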

Relevance: 100.00%

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from this national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so since the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Filling out forms manually is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
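As a hedged illustration of the querying step (not the thesis's actual prototype), the sketch below submits a value through a web search form with requests and pulls result records out of the returned page with BeautifulSoup; the URL, field name and result-row CSS class are placeholder assumptions.

```python
# Illustrative sketch (not the thesis's prototype): submitting a query
# through a web search form and extracting results. URL, field name and
# the result-row CSS class are placeholder assumptions.
import requests
from bs4 import BeautifulSoup

FORM_URL = "http://example.org/search"  # assumed search interface

# Provide input values as a crawler would after analysing the form.
response = requests.get(FORM_URL, params={"q": "used cars"}, timeout=10)

# Extract the dynamic result records embedded in the returned page.
soup = BeautifulSoup(response.text, "html.parser")
for row in soup.select(".result"):
    print(row.get_text(strip=True))
```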

Relevance: 100.00%

Abstract:

Open educational resources (OER) promise increased access, participation, quality, and relevance, in addition to cost reduction. These seemingly fantastic promises are based on the supposition that educators and learners will discover existing resources, improve them, and share the results, resulting in a virtuous cycle of improvement and re-use. By anecdotal metrics, existing web-scale search is not working for OER. This situation impairs the cycle underlying the promise of OER, endangering long-term growth and sustainability. While the scope of the problem is vast, targeted improvements in the areas of curation, indexing, and data exchange can improve the situation and create opportunities for further scale. I explore the ways in which the current system is inadequate, discuss areas for targeted improvement, and describe a prototype system built to test these ideas. I conclude with suggestions for further exploration and development.
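As a toy illustration of the indexing improvements discussed (not the prototype the abstract describes), the sketch below builds an inverted index over OER metadata records; the records and their fields are assumptions.

```python
# Toy sketch (not the paper's prototype): an inverted index over OER
# metadata, the kind of targeted indexing improvement discussed above.
# The records and their fields are placeholder assumptions.
from collections import defaultdict

records = [
    {"id": "oer-1", "title": "Intro to Linear Algebra", "subject": "math"},
    {"id": "oer-2", "title": "Linear Regression Basics", "subject": "stats"},
]

index: dict[str, set[str]] = defaultdict(set)
for rec in records:
    for token in f"{rec['title']} {rec['subject']}".lower().split():
        index[token].add(rec["id"])

# Look up resources mentioning "linear".
print(sorted(index["linear"]))  # ['oer-1', 'oer-2']
```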

Relevance: 100.00%

Abstract:

Peer-reviewed

Relevance: 100.00%

Abstract:

This final-year project for the degree in technical engineering in management computing aims to be a first approach to the world of semantic analysis of websites. It consists, on the one hand, of the creation of an ontology to store information from the LinkedIn website so that it can later be analysed, allowing the data to be filtered in a practical way and avoiding an excess of non-useful information. On the other hand, the work includes the development of an application for obtaining the information from the LinkedIn website automatically, and a method for importing it into the ontology created.
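As a hedged sketch of the filtering step, the rdflib query below selects, from an ontology populated with LinkedIn data, only the profiles matching a given skill; the namespace, property names and sample data are placeholder assumptions.

```python
# Minimal sketch: filtering a populated ontology with SPARQL via rdflib
# to avoid an excess of non-useful information. Namespace, property names
# and the sample data are placeholder assumptions.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

LI = Namespace("http://example.org/linkedin#")
g = Graph()

person = URIRef("http://example.org/person/1")
g.add((person, RDF.type, LI.Profile))
g.add((person, LI.hasSkill, Literal("SPARQL")))

# Keep only profiles with the skill we care about.
results = g.query(
    """SELECT ?p WHERE { ?p a li:Profile ; li:hasSkill "SPARQL" . }""",
    initNs={"li": LI},
)
for (p,) in results:
    print(p)
```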

Relevance: 100.00%

Abstract:

The case at hand is the LinkedIn social network. The number of job offers and applications sometimes makes it difficult to extract the necessary information. The aim is to create an ontology in order to enrich the data present on the LinkedIn website with semantic information. Using the Protégé software, this ontology will be created and, by means of the APIs that LinkedIn offers, a means of inferring information about job offers and applications will be provided.