980 results for Meta Data, Semantic Web, Software Maintenance, Software Metrics
Abstract:
Supply chains comprise complex processes spanning multiple trading partners. The operations involved generate a large number of events that need to be integrated in order to enable internal and external traceability. Furthermore, the provenance of the artifacts and agents involved in supply chain operations is now a key traceability requirement. In this paper we propose a Semantic Web/Linked Data-powered framework for the event-based representation and analysis of supply chain activities governed by the EPCIS specification. We specifically show how a new EPCIS event type called "Transformation Event" can be semantically annotated using EEM (the EPCIS Event Model) to generate linked data that can be exploited for internal event-based traceability in supply chains involving the transformation of products. To integrate provenance with traceability, we propose a mapping from EEM to PROV-O. We exemplify our approach on an abstraction of the production processes that are part of the wine supply chain.
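A minimal sketch of the idea behind annotating a "Transformation Event" and mapping it to PROV-O. The namespace URIs are the published EEM and PROV-O namespaces, but the specific property names and the event/EPC identifiers below are illustrative assumptions, not the authors' actual mapping:

```python
# Sketch: EEM-style triples for one EPCIS TransformationEvent, then a
# term-level mapping into PROV-O (event -> Activity, inputs -> used,
# outputs -> generated). Property names here are illustrative.

EEM = "http://purl.org/eem#"
PROV = "http://www.w3.org/ns/prov#"

def annotate_transformation(event_id, inputs, outputs):
    """Return EEM-flavoured (s, p, o) triples for one transformation event."""
    triples = [(event_id, "rdf:type", EEM + "TransformationEvent")]
    for epc in inputs:
        triples.append((event_id, EEM + "hasInputEPC", epc))
    for epc in outputs:
        triples.append((event_id, EEM + "hasOutputEPC", epc))
    return triples

def to_prov(triples):
    """Map EEM terms to PROV-O terms, leaving everything else untouched."""
    mapping = {
        EEM + "TransformationEvent": PROV + "Activity",
        EEM + "hasInputEPC": PROV + "used",
        EEM + "hasOutputEPC": PROV + "generated",
    }
    return [(s, mapping.get(p, p), mapping.get(o, o)) for s, p, o in triples]

# A wine-supply-chain flavoured example: pressing grapes into must.
event = annotate_transformation(
    "ex:pressing42",
    inputs=["ex:grapes-lot-7"],
    outputs=["ex:must-batch-3"],
)
prov = to_prov(event)
```

Once in PROV-O form, the same triples answer provenance queries ("which inputs was this batch derived from?") with standard tooling, which is the point of unifying traceability and provenance.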
Abstract:
This study explored the strategies that community-based, consumer-focused advocacy, alternative service organizations (ASOs) implemented to adapt to changes in the nonprofit funding environment (Oliver & McShane, 1979; Perlmutter, 1988a, 1994). The extent to which current funding trends have influenced ASOs is unclear, as little empirical research has been conducted in this area (Magnus, 2001; Marquez, 2003; Powell, 1986). This study used a qualitative research design to investigate strategies these organizations implemented to adapt to changes such as decreasing government, foundation, and corporate funding and an increasing number of nonprofit organizations. More than 20 community informants helped to identify, locate, and provide information about ASOs. Semi-structured interviews were conducted with a sample of 30 ASO executive directors from diverse organizations in Miami-Dade and Broward Counties, in South Florida. Data analysis was facilitated by the use of ATLAS.ti, version 5, a qualitative data analysis computer software program designed for grounded theory research. This process generated five major themes: Funding Environment; Internal Structure; Strategies for Survival; Sustainability; and Committing to the Cause, Mission, and Vision. The results indicate that ASOs are struggling to survive financially by cutting programs, decreasing staff, and limiting service to consumers. They are also exploring ways to develop fundraising strategies, for example increasing the number of grant proposals written, focusing on fund development, and establishing for-profit ventures. Even organizations that state that they are currently financially stable are concerned about their financial vulnerability. There is little flexibility or cushioning to adjust to "funding jolts." The fear of losing current funding levels and being placed in a tenuous financial situation is a constant concern for these ASOs.
Further data collected from the self-administered Funding Checklist and demographic forms were coded and analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics and frequencies generated findings regarding revenue, staff complement, the use of volunteers and fundraising consultants, and fundraising practices. The study proposes a model of funding relationships and presents implications for social work practice and policy, along with recommendations for future research.
Abstract:
This thesis presents a certification method for semantic web service compositions that aims to statically ensure their functional correctness. The certification method encompasses two dimensions of verification, termed the base and functional dimensions. The base dimension concerns the verification of the correct application of the semantic web services in the composition, i.e., ensuring that each service invocation in the composition complies with its respective service definition. Certification in this dimension exploits the semantic compatibility between the invocation arguments and the formal parameters of the semantic web service. The functional dimension aims to ensure that the composition satisfies a given specification expressed in the form of preconditions and postconditions. This dimension is formalized by a calculus based on Hoare logic. Partial correctness specifications involving compositions of semantic web services can be derived from the proposed deductive system. Our work is also characterized by the use of a fragment of description logic, namely ALC, to express the partial correctness specifications. In order to operationalize the proposed certification method, we developed a supporting environment for defining semantic web service compositions as well as for conducting the certification process. The certification method was experimentally evaluated by applying it to three different proofs of concept, which enabled a broad evaluation of the method.
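To make the functional dimension concrete, here is a toy sketch of checking a sequential composition against a pre/postcondition specification with the Hoare rule for sequencing. Conditions are sets of atomic concepts and set inclusion stands in for ALC subsumption; the service names and the monotonic state update are illustrative simplifications, not the thesis's calculus:

```python
# Toy certifier for {P} S1; S2; ... {Q}: each service's precondition must be
# implied by the state established so far; its postcondition then extends the
# state. Set inclusion approximates ALC subsumption; a real certifier would
# invoke a description logic reasoner, and postconditions could also retract
# facts, which this simplified sketch ignores.

def implies(state, condition):
    # "state implies condition" approximated as: every required concept holds
    return condition <= state

def certify(spec_pre, services, spec_post):
    """services: list of (name, pre, post) with pre/post as concept sets.
    Returns (ok, offending_service_or_None)."""
    state = set(spec_pre)
    for name, pre, post in services:
        if not implies(state, pre):
            return False, name        # precondition not established here
        state = state | post          # postcondition now holds
    return implies(state, spec_post), None

ok, failed = certify(
    spec_pre={"OrderPlaced"},
    services=[
        ("checkStock", {"OrderPlaced"}, {"StockReserved"}),
        ("charge",     {"StockReserved"}, {"PaymentDone"}),
    ],
    spec_post={"PaymentDone"},
)
```

Removing `checkStock` from the composition makes `charge`'s precondition underivable, and the certifier reports it as the offending invocation, which mirrors how a static check localizes a faulty composition step.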
Abstract:
Cloud computing can be defined as a distributed computational model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is that it enables the use of multiple providers (i.e., a multi-cloud architecture) to compose a set of services in order to obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) the use of an intermediary layer that stands between the consumers of cloud services and the providers; (ii) the use of standardized interfaces to access the cloud; or (iii) the use of models with open specifications. This work outlines an approach for evaluating these strategies. The evaluation was performed, and it was found that, despite the advances made by these strategies, none of them actually solves the cloud lock-in problem. In this sense, this work proposes the use of the Semantic Web to avoid cloud lock-in, where RDF models are used to specify the features of a cloud, which are managed through SPARQL queries. In this direction, this work: (i) presents an evaluation model that quantifies the cloud lock-in problem; (ii) evaluates cloud lock-in across three multi-cloud solutions and three cloud platforms; (iii) proposes the use of RDF and SPARQL for the management of cloud resources; (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal; and (v) compares three multi-cloud solutions with CQM with respect to response time and effectiveness in resolving cloud lock-in.
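A minimal sketch of the underlying idea: describing cloud features as RDF-style triples and querying them with a tiny pattern matcher standing in for SPARQL. The resource names and properties are invented for illustration; CQM itself is an actual SPARQL server over real RDF models:

```python
# Provider-independent feature descriptions as (subject, predicate, object)
# triples. Querying the triples, rather than each provider's proprietary
# API, is what decouples the application from any one cloud platform.

triples = {
    ("aws:ec2-small", "cloud:type", "compute"),
    ("aws:ec2-small", "cloud:ram",  "2GB"),
    ("gce:n1-std",    "cloud:type", "compute"),
    ("gce:n1-std",    "cloud:ram",  "4GB"),
}

def query(pattern):
    """Match a (s, p, o) pattern; None plays the role of a SPARQL variable,
    as in SELECT ?s WHERE { ?s cloud:type "compute" }."""
    s, p, o = pattern
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# All compute offerings, regardless of provider:
compute = query((None, "cloud:type", "compute"))
```

Because the query is written against the shared vocabulary rather than a provider API, swapping or adding a provider only means loading more triples, which is the lock-in-avoidance argument in miniature.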
Abstract:
This thesis focuses on the extensions made to BEX (Bibliographic Explorer), a web app for navigating scientific publications through their citations. It belongs to the field of Semantic Publishing, a new research area arising from the application of Semantic Web technologies to Scholarly Publishing, whose goal is the publication of academic articles enriched with semantic metadata. BEX was born within the Semantic Lancet Project of the Department of Computer Science of the University of Bologna, whose objective is to build a Linked Open Dataset of academic publications, the Semantic Lancet Triplestore (SLT), and to provide tools for high-level navigation and in-depth use of the data it contains. The scholarly Linked Open Data processed by BEX are sets of RDF triples conforming to the SPAR ontologies. Originally, BEX used as its backend the SLT dataset, which contains metadata about publications of Elsevier's Journal of Web Semantics. BEX offers advanced views through an interactive interface and a good user experience. BEX's typical user is the academic researcher, who makes extensive use of Digital Libraries (DL) and the services they offer in daily work. Given the activity of researchers in the field of Semantic Publishing and the rapid spread of scholarly Linked Open Data publication, it is reasonable to extend and maintain a project that provides sense-making over data otherwise queryable only directly via SPARQL queries. The main extensions to BEX concern scalability and flexibility: pagination of search results was implemented, along with independence from SLT in order to handle datasets differing in structure and volume, and the creation of author-centric views through data aggregation and comparison between authors.
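The pagination extension can be sketched in a few lines. The page size and helper names are illustrative assumptions; in a Linked Data backend the same effect is typically obtained at query level with SPARQL `LIMIT`/`OFFSET`:

```python
# Minimal result pagination: slice an ordered result list into fixed-size
# pages, returning the requested (1-indexed) page and the total page count.

def paginate(results, page, per_page=10):
    total_pages = max(1, -(-len(results) // per_page))  # ceiling division
    start = (page - 1) * per_page
    return results[start:start + per_page], total_pages

papers = [f"paper-{i}" for i in range(25)]
page2, total = paginate(papers, page=2)   # papers 10..19 of 25, 3 pages total
```

Keeping pagination independent of the dataset's structure is what lets the same front end serve triplestores of different volume, which is the scalability point the extensions address.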
Abstract:
This thesis describes PARLEN, a tool that supports the analysis of articles, the extraction and recognition of entities (for example people, institutions, cities) and the linking of those entities to online resources. PARLEN is also able to publish the extracted data in a dataset based on Semantic Web principles and technologies.
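The entity-linking step can be sketched as follows. The lookup table and URIs are invented for illustration; PARLEN resolves recognized entities against real online knowledge bases rather than a hard-coded dictionary:

```python
# Toy entity linking: recognized mentions are resolved to DBpedia-style
# resource URIs; mentions with no match are kept but left unlinked, so a
# later step (or a human) can handle them.

KNOWN = {
    "Rome": "http://dbpedia.org/resource/Rome",
    "European Union": "http://dbpedia.org/resource/European_Union",
}

def link_entities(mentions):
    """Return (mention, uri_or_None) pairs for each recognized mention."""
    return [(m, KNOWN.get(m)) for m in mentions]

linked = link_entities(["Rome", "UNESCO"])
```

Each linked pair maps directly onto an RDF triple (mention, owl:sameAs-style link, resource URI), which is how the extracted data ends up publishable as a Semantic Web dataset.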
Abstract:
Postprint
Abstract:
Postprint
Abstract:
Describes and analyzes the results obtained from an analysis of the publications present in the Scopus database, using the rankings generated by the SCImago research group, concerning the production of the different Central American countries on the topic of documentation in the mass media. A comparison is made across the different countries of the region, and their scientific output is analyzed. Finally, based on the data analysis, a number of recommendations are made to improve production and presence in indexed databases.
Abstract:
One of the leading motivations behind the multilingual Semantic Web is to make resources digitally accessible in an online, global, multilingual context. Consequently, it is fundamental for knowledge bases to manage multilingualism and thus to be equipped with procedures for its conceptual modelling. In this context, the goal of this paper is to discuss how common-sense knowledge and cultural knowledge are modelled in a multilingual framework. More particularly, multilingualism and conceptual modelling are dealt with from the perspective of FunGramKB, a lexico-conceptual knowledge base for natural language understanding. This project argues for a clear division between the lexical and the conceptual dimensions of knowledge. Moreover, the conceptual layer is organized into three modules, which result from a strong commitment to capturing semantic knowledge (Ontology), procedural knowledge (Cognicon) and episodic knowledge (Onomasticon). Cultural mismatches are discussed and formally represented at the three conceptual levels of FunGramKB.
Abstract:
The World Wide Web Consortium, W3C, is known for standards like HTML and CSS, but there's a lot more to it than that: mobile, automotive, publishing, graphics, TV and more. Then there are horizontal issues like privacy, security, accessibility and internationalisation. Many of these assume that there is an underlying data infrastructure to power applications. In this session, W3C's Data Activity Lead, Phil Archer, will describe the overall vision for better use of the Web as a platform for sharing data and how that translates into recent, current and possible future work. What's the difference between using the Web as a data platform and as a glorified USB stick? Why does it matter? And what makes a standard a standard anyway? Speaker Biography: Phil Archer is Data Activity Lead at W3C, the industry standards body for the World Wide Web, coordinating W3C's work in the Semantic Web and related technologies. He is most closely involved in the Data on the Web Best Practices, Permissions and Obligations Expression, and Spatial Data on the Web Working Groups. His key themes are interoperability through common terminology and URI persistence. As well as his work at the W3C, his career has encompassed broadcasting, teaching, linked data publishing, copy writing, and, perhaps incongruously, countryside conservation. The common thread throughout has been a knack for communication, particularly communicating complex technical ideas to a more general audience.
Abstract:
The continuous flow of technological developments in the communications and electronics industries has led to the growing expansion of the Internet of Things (IoT). By leveraging the capabilities of smart networked devices and integrating them into existing industrial, leisure and communication applications, the IoT is expected to positively impact both economy and society, reducing the gap between the physical and digital worlds. Therefore, several efforts have been dedicated to the development of networking solutions addressing the diversity of challenges associated with such a vision. In this context, the integration of Information Centric Networking (ICN) concepts into the core of the IoT is a research area gaining momentum and involving both research and industry actors. The massive amount of heterogeneous devices, as well as the data they produce, is a significant challenge for a wide-scale adoption of the IoT. In this paper we propose a service discovery mechanism, based on Named Data Networking (NDN), that leverages a semantic matching mechanism to achieve a flexible discovery process. The development of appropriate service discovery mechanisms enriched with semantic capabilities for understanding and processing context information is a key feature for turning raw data into useful knowledge and ensuring interoperability among different devices and applications. We assessed the performance of our solution through the implementation and deployment of a proof-of-concept prototype. The obtained results illustrate the potential of integrating semantic and ICN mechanisms to enable flexible service discovery in IoT scenarios.
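A minimal sketch of what semantic matching adds to name-based discovery: a request for a concept matches any service advertising that concept or one of its subconcepts. The toy ontology, NDN names and concept labels are illustrative assumptions, not the paper's prototype:

```python
# Semantic service discovery sketch: services advertise a concept from a
# small sensing ontology; a discovery request matches a service if the
# advertised concept is subsumed by (is-a) the requested one. Plain exact-
# match discovery would miss the subconcept matches found here.

SUBCLASS = {                      # child -> parent in a toy ontology
    "TemperatureSensing": "Sensing",
    "HumiditySensing": "Sensing",
    "Sensing": "Service",
}

def is_a(concept, target):
    while concept is not None:
        if concept == target:
            return True
        concept = SUBCLASS.get(concept)
    return False

services = {                      # NDN-style name -> advertised concept
    "/home/livingroom/temp": "TemperatureSensing",
    "/home/cellar/humidity": "HumiditySensing",
    "/home/gateway/logs": "Service",
}

def discover(target):
    """Return the NDN names whose advertised concept matches the request."""
    return sorted(n for n, c in services.items() if is_a(c, target))

matches = discover("Sensing")     # both sensors, but not the log service
```

An exact string match on "Sensing" would return nothing here; the subsumption walk is what makes discovery flexible across heterogeneous device descriptions.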
Abstract:
Metadata are the keys to categorizing information in digital services. In essence this is the cataloguing and classification of information, and its use constitutes one of the best practices in information management; just as catalogues and OPACs do, it leads to better services for users, whether of virtual libraries, e-government, e-learning or e-health. Metadata are also the basis for future developments such as the Semantic Web. The topic is of particular interest to librarians since, as organizers of knowledge, they know the classification schemes, data-recording rules such as AACR2, and specialized vocabularies. This document covers some basic concepts on the subject and comments on the steps Latin America is taking in this global area.
Abstract:
Background: The -819C/T polymorphism in the interleukin 10 (IL-10) gene has been reported to be associated with inflammatory bowel disease (IBD), but previous results are conflicting. Materials and Methods: The present study aimed at investigating the association between this polymorphism and the risk of IBD using a meta-analysis. PubMed, Web of Science, EMBASE, Google Scholar and China National Knowledge Infrastructure (CNKI) databases were systematically searched to identify relevant publications from their inception to April 2016. The pooled odds ratio (OR) with 95% confidence interval (CI) was calculated using fixed- or random-effects models. Results: A total of 7 case-control studies containing 1890 patients and 2929 controls were enrolled in this meta-analysis, and our results showed no association between the IL-10 gene -819C/T polymorphism and IBD risk (TT vs. CC: OR=0.81, 95% CI 0.64-1.04; CT vs. CC: OR=0.92, 95% CI 0.81-1.05; dominant model: OR=0.90, 95% CI 0.80-1.02; recessive model: OR=0.84, 95% CI 0.66-1.06). In a subgroup analysis by nationality, the -819C/T polymorphism was not associated with IBD in either Asians or Caucasians. In the subgroup analysis stratified by IBD type, a significant association was found in Crohn's disease (CD) (CT vs. CC: OR=0.68, 95% CI 0.48-0.97). Conclusion: In summary, the present meta-analysis suggests that the IL-10 gene -819C/T polymorphism may be associated with CD risk.
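The fixed-effects (inverse-variance) pooling behind such pooled ORs can be sketched briefly. The two (OR, 95% CI) inputs below are hypothetical illustrations, not the seven studies in this review:

```python
import math

# Fixed-effects inverse-variance pooling of log odds ratios: each study's
# standard error is recovered from its 95% CI width, studies are weighted by
# 1/SE^2, and the weighted mean log OR is exponentiated back to an OR.

def pooled_or(studies):
    """studies: list of (or_, ci_low, ci_high). Returns (OR, lo, hi)."""
    num = den = 0.0
    for or_, lo, hi in studies:
        log_or = math.log(or_)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        w = 1.0 / se**2                                  # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Hypothetical example: two studies, neither individually significant.
or_, lo, hi = pooled_or([(0.80, 0.60, 1.07), (0.95, 0.75, 1.20)])
```

Note the narrower study (the second) carries more weight, so the pooled estimate sits closer to its OR; a random-effects model would additionally widen the CI by a between-study variance term.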