980 results for Meta Data, Semantic Web, Software Maintenance, Software Metrics
Validity of alcohol screening instruments in general population gender studies: an analytical review
Resumo:
The present study is an analytical review of the methodology used in studies of the efficacy of screening instruments for detecting harmful alcohol use/alcohol dependence by gender in population surveys. A systematic literature review was conducted using the Web of Science, PubMed and PsycInfo databases. Population studies were included without date restriction, in English, Spanish or Portuguese, with adult samples, evaluating the psychometric characteristics of any alcohol screening instrument; studies of special or treatment populations, as well as studies limited to the prevalence of alcohol consumption, were excluded. Thirteen studies were selected for inclusion in the present review. According to these studies, the instruments that performed best among men were the AUDIT and its derivatives (6 studies) and the CAGE (2 studies), whereas among women they were the AUDIT and its derivatives (7 studies), followed by the CAGE (3 studies). The increase in alcohol consumption and alcohol-related problems, and their implications for public health, indicate the urgent need to adapt screening instruments to gender differences in the general population. Population surveys in this area are scarce; moreover, the studies found use heterogeneous methodologies, which makes accurate comparisons difficult.
Resumo:
Electronic business represents a major new development perspective for worldwide trade. Together with the idea of e-business, and the need to exchange business messages between trading partners, the concept of business-to-business (B2B) integration arose. B2B integration is becoming necessary to allow partners to communicate and exchange business documents, such as catalogues, purchase orders, reports and invoices, overcoming architectural, application-level and semantic differences, according to the business processes implemented by each enterprise. Business relationships can be very heterogeneous, and consequently there are various ways to integrate enterprises with each other. Moreover, nowadays not only large enterprises but also small and medium enterprises are moving towards e-business: more than two-thirds of Small and Medium Enterprises (SMEs) use the Internet as a business tool. One of the business areas actively facing the interoperability problem is supply chain management. In order to really allow SMEs to improve their business and to fully exploit ICT technologies in their business transactions, three main players must be considered and joined: the new emerging ICT technologies, the scenario and requirements of the enterprises, and the world of standards and standardisation bodies. This thesis presents the definition and development of an interoperability framework (and the related standardisation initiatives) that provides the Textile/Clothing sector with a shared set of business documents and protocols for electronic transactions. Considering some limitations of this framework, the thesis then proposes an ontology-based approach that improves its functionality and, by exploiting Semantic Web technologies, improves the standardisation life-cycle, understood as the development, dissemination and adoption of B2B protocols for a specific business domain. The use of ontologies allows the semantic modelling of knowledge domains, upon which it is possible to develop a set of components for better management of B2B protocols and to ease their comprehension and adoption by the target users.
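As a rough illustration of the ontology-based modelling the abstract refers to (the namespace, class and property names below are invented placeholders, not the thesis framework), a minimal OWL class hierarchy for B2B business documents can be sketched in Python with rdflib:

```python
# A minimal sketch (not the thesis framework): a tiny OWL vocabulary for
# B2B business documents, built with rdflib. All names are illustrative.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/b2b#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# Core document classes
g.add((EX.BusinessDocument, RDF.type, OWL.Class))
for doc in ("Catalogue", "PurchaseOrder", "Report", "Invoice"):
    cls = EX[doc]
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.subClassOf, EX.BusinessDocument))

# A property linking a document to the enterprise that issued it
g.add((EX.Enterprise, RDF.type, OWL.Class))
g.add((EX.issuedBy, RDF.type, OWL.ObjectProperty))
g.add((EX.issuedBy, RDFS.domain, EX.BusinessDocument))
g.add((EX.issuedBy, RDFS.range, EX.Enterprise))

print(g.serialize(format="turtle"))
```

Serialising the graph to Turtle yields a shared, machine-readable vocabulary that trading partners could, in principle, extend with sector-specific document types.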
Resumo:
The goal of this thesis work is the refinement of a Health Smart Home system, i.e. a physical environment (for example a home) that incorporates a communication network able to connect electronic devices and remotely controllable services, with the aim of making life easier for elderly, ill or disabled people in their own homes. This thesis will show how it was possible to build such a system starting from the theories and technologies developed for the Semantic Web, in order to turn the physical environment into a fully functioning Cyber-Physical (Eco)System.
Resumo:
An introduction to Semantic Web techniques and the implementation of an approach able to recreate the familiar environment of an ordinary search engine, with semantic-lexical features and the possibility of extracting, from the search results, the key concepts and terms that form the groups into which documents with common topics are collected.
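Purely as an illustrative sketch of the described idea, and not the approach implemented in the thesis, the following snippet extracts weighted key terms from a small set of search results and groups documents with common topics using TF-IDF and k-means:

```python
# Illustrative sketch only: extract key terms from a set of search results
# and group documents that share topics, using TF-IDF and k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "semantic web ontologies and linked data",
    "linked data publishing with RDF",
    "text mining and latent semantic analysis",
    "topic extraction from scientific articles",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()

# For each cluster, print the highest-weighted terms as its "key concepts"
for c in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[c].argsort()[::-1][:3]
    print(f"cluster {c}:", [terms[i] for i in top])
```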
Resumo:
The goal of the present research is to define a Semantic Web framework for precedent modelling, using knowledge extracted from text, metadata and rules, while maintaining a strong text-to-knowledge morphism between legal text and legal concepts, in order to fill the gap between a legal document and its semantics. The framework is composed of four models that make use of standard languages from the Semantic Web stack of technologies: a document metadata structure, modelling the main parts of a judgement and creating a bridge between a text and its semantic annotations of legal concepts; a legal core ontology, modelling abstract legal concepts and institutions contained in a rule of law; a legal domain ontology, modelling the main legal concepts in a specific domain covered by case law; and an argumentation system, modelling the structure of argumentation. The input to the framework includes metadata associated with judicial concepts and an ontology library representing the structure of case law. The research builds on the community's previous efforts in legal knowledge representation and rule interchange for applications in the legal domain, applying the theory to a set of real legal documents and stressing OWL axiom definitions as much as possible so that they provide a semantically powerful representation of the legal document and solid ground for an argumentation system based on a defeasible subset of predicate logic. Some new features of OWL 2 appear to unlock useful reasoning capabilities for legal knowledge, especially when combined with defeasible rules and argumentation schemes. The main task is thus to formalise the legal concepts and argumentation patterns contained in a judgement, with the following requirement: to check, validate and reuse the discourse of a judge, and the argumentation he produces, as expressed by the judicial text.
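As a hedged illustration of the document-metadata layer described above (all IRIs, class and property names are invented placeholders, not those of the framework), a judgement paragraph can be linked to an abstract legal concept with rdflib as follows:

```python
# A minimal, hypothetical sketch of the "document metadata" idea: linking a
# fragment of a judgement text to a legal concept in a domain ontology.
# Names and IRIs are illustrative, not those used in the thesis.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

DOC = Namespace("http://example.org/judgement/")      # document structure
LEGAL = Namespace("http://example.org/legal-core#")   # stand-in legal core ontology

g = Graph()
g.bind("doc", DOC)
g.bind("legal", LEGAL)

# A paragraph of the judgement text...
g.add((DOC.paragraph_12, RDF.type, DOC.Paragraph))
g.add((DOC.paragraph_12, DOC.text,
       Literal("The consumer is entitled to withdraw from the contract...")))

# ...annotated with the abstract legal concept it expresses
g.add((LEGAL.RightOfWithdrawal, RDF.type, RDFS.Class))
g.add((DOC.paragraph_12, DOC.expressesConcept, LEGAL.RightOfWithdrawal))

print(g.serialize(format="turtle"))
```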
Resumo:
A user-friendly web application supporting researchers in the efficient execution of specific tasks for searching and analysing scientific articles.
Resumo:
From the need to solve the disambiguation problem for a set of authors made available by the University of Bologna, the Semantic Lancet, came the idea of designing a disambiguation algorithm able to adapt, if needed, to any kind of author list. For the testing phase of the algorithm, a dataset was generated (11,724 authors, including 1,295 pairs to disambiguate) from the information available in the "database systems and logic programming" (DBLP) bibliography, so as to be as heterogeneous as possible, i.e. to contain as many disambiguation cases as possible. For the first screening tests an alternative algorithm was defined, discussed in Section 4.3, which obtained a precision of 1% and a recall of 81%. The proposed algorithm, configured with the configuration model, instead obtained a precision of 81% and a recall of 70%; this test is discussed in Section 4.4. The algorithm was subsequently also tested on another dataset, Semantic Lancet (919 authors, including 34 pairs to disambiguate), obtaining, thanks to appropriate changes to the configuration file, a precision of 84% and a recall of 79%, as discussed in Section 4.5.
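The following toy baseline is not the algorithm proposed in the thesis; it merely illustrates, under simplified assumptions, how pairwise author disambiguation and the reported precision/recall measures can be computed over labelled pairs:

```python
# Not the thesis algorithm: a toy pairwise disambiguation baseline that
# decides whether two author records refer to the same person by comparing
# names and co-author lists, and reports precision and recall on labelled pairs.
from difflib import SequenceMatcher

def same_author(a, b, threshold=0.85):
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    shared_coauthors = set(a["coauthors"]) & set(b["coauthors"])
    return bool(name_sim >= threshold or (name_sim >= 0.6 and shared_coauthors))

# Labelled pairs: (record_a, record_b, same_person)
pairs = [
    ({"name": "J. Smith", "coauthors": ["A. Rossi"]},
     {"name": "John Smith", "coauthors": ["A. Rossi"]}, True),
    ({"name": "J. Smith", "coauthors": ["B. Chen"]},
     {"name": "Jane Smith", "coauthors": ["C. Verdi"]}, False),
]

tp = sum(1 for a, b, y in pairs if y and same_author(a, b))
fp = sum(1 for a, b, y in pairs if not y and same_author(a, b))
fn = sum(1 for a, b, y in pairs if y and not same_author(a, b))

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")
```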
Resumo:
Federation is a concept widely used and implemented in many areas of computer science. In particular, it is attracting great interest in the Semantic Web, and its use is significant and important in a rapidly evolving discipline such as Enterprise Architecture. The goal of this thesis was to implement the concept of a federation of SPARQL endpoints, whose central element is the sharing of the data model among the members, which constitutes the pact of the federation. The benefits that this kind of solution brings to the Enterprise Architecture discipline are then highlighted, particularly in the area of data analysis. With respect to this last aspect, the Semantic Web offers a flexible and easily evolvable language for representing the enterprise and its data, as well as a standard protocol for querying them, namely SPARQL. Federation, in turn, brings improvements by making the data sources homogeneous from the point of view of the model, by using a single protocol (SPARQL) to access them, and by removing the critical issues related to data normalisation in analysis processes. These two aspects are enabling factors precisely for Enterprise Architecture. Finally, two possible evolutions are defined: a construct allowing the federation to be implemented and managed at the level of the SPARQL language, and a standard, shareable ontology through which the federation can be managed transparently.
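A minimal sketch of the federation idea, assuming two hypothetical endpoints that already share the same data model: SPARQL 1.1 provides the SERVICE keyword for exactly this kind of federated query, here issued from Python with SPARQLWrapper (all URLs and terms are placeholders):

```python
# A minimal sketch of a federated SPARQL query, assuming two hypothetical
# endpoints that share the same data model; SERVICE is the standard
# SPARQL 1.1 federation mechanism.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX ex: <http://example.org/enterprise#>
SELECT ?product ?revenue
WHERE {
  ?product a ex:Product .
  SERVICE <http://sales.example.org/sparql> {   # second federation member
    ?product ex:revenue ?revenue .
  }
}
"""

endpoint = SPARQLWrapper("http://catalogue.example.org/sparql")  # first member
endpoint.setQuery(QUERY)
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["product"]["value"], row["revenue"]["value"])
```

Because both members expose the same model, the client does not need to translate schemas; it only decides which triple patterns are answered by which endpoint.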
Resumo:
This thesis concerns the development of an application that exploits Semantic Web and Text Mining technologies. The application extends the work of a previous thesis by adding a semantic search feature. This feature allows the retrieval of information that would not be considered by the ordinary search method. To achieve this result, WordNet, a lexical-semantic database, and a library for Latent Semantic Analysis, a Text Mining technique, are used.
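As an illustrative sketch only (not the application's code), the combination described above can be approximated by expanding a query with WordNet synonyms via NLTK and ranking documents in an LSA space built with scikit-learn:

```python
# Illustrative only: expand a query with WordNet synonyms (NLTK) and rank a
# toy corpus in a Latent Semantic Analysis space (TF-IDF + truncated SVD).
from nltk.corpus import wordnet as wn          # requires: nltk.download("wordnet")
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the car broke down on the highway",
    "automobile engines and maintenance",
    "semantic web technologies and ontologies",
]

# 1. Query expansion: add WordNet synonyms of each query term
query = "car repair"
expanded = set(query.split())
for term in query.split():
    for syn in wn.synsets(term):
        expanded.update(lemma.name().replace("_", " ") for lemma in syn.lemmas())
expanded_query = " ".join(expanded)

# 2. LSA: project corpus and query into a low-dimensional latent space
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus + [expanded_query])
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# 3. Rank documents by cosine similarity to the expanded query
scores = cosine_similarity(lsa[-1:], lsa[:-1])[0]
for doc, score in sorted(zip(corpus, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```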
Resumo:
Linking the physical world to the Internet, also known as the Internet of Things, has increased the information and services available in everyday life and in the Enterprise world. In Enterprise IT, an increasing amount of communication takes place between IT backend systems and small IoT devices, for example sensor networks or RFID readers. This introduces challenges in terms of complexity and integration. We are working on the integration of IoT devices into Enterprise IT by leveraging SOA techniques and Semantic Web technologies. We present a SOA-based integration platform for connecting WSNs with large enterprise business processes. To ensure interoperability, our platform is based on Linked Services: thoroughly described, machine-readable, machine-reasonable service descriptions.
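To make the idea of a Linked Service description more concrete, here is a hypothetical sketch in Python with rdflib; the vocabulary IRI and terms are placeholders rather than an existing Linked Services vocabulary:

```python
# Hypothetical sketch of a "Linked Service" description for a sensor-network
# service, expressed as RDF with rdflib. The vocabulary IRI and terms below
# are placeholders, not an existing Linked Services vocabulary.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

SVC = Namespace("http://example.org/linked-services#")
EX = Namespace("http://example.org/wsn/")

g = Graph()
g.bind("svc", SVC)
g.bind("ex", EX)

g.add((EX.temperatureService, RDF.type, SVC.Service))
g.add((EX.temperatureService, RDFS.label, Literal("WSN temperature readings")))
g.add((EX.temperatureService, SVC.hasEndpoint,
       Literal("http://gateway.example.org/wsn/temperature")))
g.add((EX.temperatureService, SVC.producesOutput, EX.TemperatureObservation))
g.add((EX.TemperatureObservation, RDF.type, RDFS.Class))

print(g.serialize(format="turtle"))
```

A description of this kind is what a SOA integration layer could reason over when matching backend business processes with the capabilities of small IoT devices.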
Resumo:
Online reputation management deals with monitoring and influencing the online record of a person, an organization or a product. The Social Web offers increasingly simple ways to publish and disseminate personal or opinionated information, which can rapidly have a disastrous influence on the online reputation of these entities. The author focuses on the Social Web and the possibilities of its integration with the Semantic Web as a resource for semi-automated tracking of online reputations using imprecise natural language terms. The inherent structure of natural language supports humans not only in communication but also in the perception of the world; fuzziness is therefore a promising tool for transforming these human perceptions into computer artifacts. Through fuzzy grassroots ontologies, the Social Semantic Web becomes more natural and can thus streamline online reputation management. For readers interested in the crossover of computer science, information systems and the social sciences, this book is an ideal source for becoming acquainted with the evolving field of fuzzy online reputation management in the Social Semantic Web area.
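As a toy illustration of the fuzziness idea (not taken from the book), an imprecise term such as "poor reputation" can be modelled as a fuzzy set that assigns graded membership to crisp reputation scores:

```python
# A toy illustration (not from the book): representing an imprecise natural
# language term such as "poor reputation" as a fuzzy set over a 0-100
# reputation score, so vague human judgements map to graded memberships.
def membership_poor_reputation(score: float) -> float:
    """Degree (0..1) to which a reputation score counts as 'poor'."""
    if score <= 20:
        return 1.0
    if score >= 50:
        return 0.0
    return (50 - score) / 30  # linear ramp between full and zero membership

for score in (10, 30, 45, 70):
    print(score, round(membership_poor_reputation(score), 2))
```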
Resumo:
Changes in glaciers and ice caps provide some of the clearest evidence of climate change, and as such they constitute key variables for early-detection strategies in global climate-related observations. These changes have impacts on global sea-level fluctuations, on the regional to local natural hazard situation, and on societies dependent on glacier meltwater. Internationally coordinated collection and publication of standardised information about ongoing glacier changes was initiated back in 1894. The compiled data sets on the global distribution of and changes in glaciers and ice caps provide the backbone of numerous scientific publications on the latest findings about surface ice on land. Since the very beginning, the compiled data have been published by the World Glacier Monitoring Service and its predecessor organisations. However, the corresponding data tables, formats and metadata are mainly of use to specialists.
Resumo:
In low-accumulation regions, the reliability of δ18O-derived temperature signals from ice cores within the Holocene is unclear, primarily due to the small climate changes relative to the intrinsic noise of the isotopic signal. In order to learn about the representativity of single ice cores and to optimise future ice-core-based climate reconstructions, we studied the stable-water isotope composition of firn at Kohnen Station, Dronning Maud Land, Antarctica. Analysing δ18O in two 50 m long snow trenches allowed us to create an unprecedented two-dimensional image characterising the isotopic variations from the centimetre to the hundred-metre scale. This data set includes the complete trench oxygen isotope record together with the metadata used in the study.
Resumo:
DynaLearn (http://www.DynaLearn.eu) develops a cognitive artefact that engages learners in an active learning-by-modelling process to develop conceptual system knowledge. Learners create external representations using diagrams. The diagrams capture conceptual knowledge using the Garp3 Qualitative Reasoning (QR) formalism [2]. The expressions can be simulated, confronting learners with their logical consequences. To further aid learners, DynaLearn employs a sequence of knowledge representations (Learning Spaces, LS) with increasing complexity in terms of the modelling ingredients a learner can use [1]. An online repository contains QR models created by experts/teachers and learners. The server runs semantic services [4] to generate feedback at the request of learners via the workbench. The feedback is communicated to the learner via a set of virtual characters, each having its own competence [3]. A specific feedback instance thus incorporates three aspects: content, character appearance, and a didactic setting (e.g. Quiz mode). In the interactive event we will demonstrate the latest achievements of the DynaLearn project: first, the six Learning Spaces for learners to work with; second, the generation of feedback relevant to the individual needs of a learner using Semantic Web technology; and third, the verbalisation of the feedback via different animated virtual characters, notably Basic help, Critic, Recommender, Quizmaster and Teachable agent.