33 results for SIB Semantic Information Broker OSGI Semantic Web


Relevance:

100.00%

Publisher:

Abstract:

Semantic Web technologies provide the means to express knowledge in a formal and standardized manner, enabling machines to automatically derive meaning from data. Often this knowledge is uncertain, or different degrees of certainty may be assigned to the same statements. This is the case in many fields of study, such as the Digital Humanities, Science and the Arts. The challenge lies in the fact that our knowledge about the surrounding world is dynamic and may evolve based on new data coming from the latest discoveries. Furthermore, we should be able to express conflicting, debated or disputed statements in an efficient, effective and consistent way without the need to assert them. We call this approach 'Expressing Without Asserting' (EWA). In this work we identify the existing methods that are compatible with current Semantic Web standards and enable us to express EWA. In our research we were able to show that existing reification methods such as Named Graphs, Singleton Properties, Wikidata Statements and RDF-Star are the most suitable methods to represent EWA in a reliable way. Next we compare these methods with our own method, namely Conjectures, from a quantitative perspective. Our main objective was to put Conjectures under stress tests, leveraging large datasets created ad hoc from art-related Wikidata dumps, and to measure performance in various triplestores against similar concurrent methods. Our experiments show that Conjectures are a formidable tool to express EWA efficiently and effectively. In some cases, Conjectures outperform state-of-the-art methods such as Singleton Properties and RDF-Star, exposing their great potential. It is our firm belief that Conjectures represent a suitable solution to EWA issues. Conjectures in their weak form are fully compatible with Semantic Web standards, especially RDF and SPARQL. Furthermore, Conjectures benefit from a comprehensive syntax and intuitive semantics that make them easy to learn and adopt.
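
As an illustration of the 'Expressing Without Asserting' idea (not taken from the thesis), the following minimal Python sketch uses rdflib and one of the reification methods the abstract mentions, Named Graphs: a disputed attribution is placed inside a named graph, so the claim is expressed and can be described, while the default graph asserts nothing about it. The example.org vocabulary and the statement itself are illustrative assumptions.

from rdflib import Dataset, URIRef

# TriG snippet: the claim "Mona Lisa was created by Leonardo" lives in a
# named graph, so it is expressed but not asserted in the default graph.
# The default graph only records provenance about the claim itself.
trig_data = """
@prefix ex: <http://example.org/> .

ex:attribution_claim {
    ex:MonaLisa ex:creator ex:Leonardo .
}

ex:attribution_claim ex:statedBy ex:SomeScholar .
"""

ds = Dataset()  # default_union=False: named graphs are not merged in
ds.parse(data=trig_data, format="trig")

# The disputed triple is retrievable from the named graph ...
claim_graph = ds.graph(URIRef("http://example.org/attribution_claim"))
print(len(claim_graph))          # 1 triple expressed in the claim graph

# ... while the default graph holds only the provenance triple and makes
# no assertion about who created the Mona Lisa.
print(len(ds.default_context))   # 1 triple

A SPARQL query against the default graph would therefore not return the attribution, which is the behaviour EWA calls for; Conjectures, Singleton Properties and RDF-Star achieve a comparable effect with different syntax and performance trade-offs.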

Relevance:

50.00%

Publisher:

Abstract:

Building Information Modelling has been changing the design and construction field ever since it entered the market. It took some time for it to show its capabilities, and it takes time to master before all of its best features can be exploited. Because it was conceived to be adopted from the earliest stage of design, in order to get the maximum out of the decision-making process, it still struggles to adapt to existing buildings. In fact, there is a branch of this methodology dedicated to what has already been built, called Historic BIM or HBIM. This study aims to clarify what BIM and HBIM are, both from a theoretical point of view and in practice, by applying the state of the art from scratch to a case study. The fortress of San Felice sul Panaro was chosen: a marvellous building with a thousand years of history in its bricks, which has suffered violent earthquakes but is still standing. By means of this example, the limits that can be encountered when applying the BIM methodology to existing heritage will be shown, and the new features that a simple 2D design could not achieve will be pointed out.

Relevance:

50.00%

Publisher:

Abstract:

Most existing open-source search engines use keyword or tf-idf based techniques to find documents and web pages relevant to an input query. Although these methods, with the help of PageRank or knowledge graphs, have proved effective in some cases, they often fail to retrieve relevant results for more complex queries that require semantic understanding. In this thesis, a self-supervised information retrieval system based on transformers is employed to build a semantic search engine over the library of the Gruppo Maggioli company. Semantic search, or search with meaning, refers to understanding the query instead of simply finding word matches and, in general, to representing knowledge in a way suitable for retrieval. We chose to investigate a new self-supervised strategy for training on unlabeled data, based on the creation of pairs of 'artificial' queries and their respective positive passages. We claim that by removing the reliance on labeled data, we can exploit the large volume of unlabeled material on the web without being limited to languages or domains where labeled data is abundant.
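
A minimal sketch of the self-supervised setup described above: synthetic ('artificial') queries are generated for unlabeled passages, and the resulting (query, positive passage) pairs train a transformer bi-encoder with in-batch negatives. The model names, example passages and hyperparameters below are illustrative assumptions, not the thesis' actual configuration.

from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from sentence_transformers import SentenceTransformer, InputExample, losses

# Unlabeled passages from the document collection (placeholder examples).
passages = [
    "Article 12 regulates building permits issued by the municipality ...",
    "The public tender procedure requires publication of the call notice ...",
]

# Step 1: generate pseudo-queries for each passage with a doc2query-style model
# (assumed checkpoint; any query-generation seq2seq model would do).
qgen_name = "doc2query/msmarco-t5-base-v1"
tok = AutoTokenizer.from_pretrained(qgen_name)
qgen = AutoModelForSeq2SeqLM.from_pretrained(qgen_name)

pairs = []
for passage in passages:
    inputs = tok(passage, return_tensors="pt", truncation=True, max_length=384)
    outputs = qgen.generate(**inputs, max_length=64, do_sample=True,
                            top_p=0.95, num_return_sequences=3)
    for ids in outputs:
        query = tok.decode(ids, skip_special_tokens=True)
        pairs.append(InputExample(texts=[query, passage]))

# Step 2: train a bi-encoder on the (query, passage) pairs; in-batch negatives
# come for free with MultipleNegativesRankingLoss, so no labels are needed.
model = SentenceTransformer("distilbert-base-multilingual-cased")
loader = DataLoader(pairs, shuffle=True, batch_size=16)
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=100)

At query time, user queries and passages are embedded with the trained model and matched by cosine similarity, which is what allows the engine to retrieve semantically relevant passages even when no keywords overlap.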