847 results for Web Semantico semantic open data geoSPARQL
Abstract:
The Smart-M3 software, inherited from the European SOFIA project, which ended in 2011, makes it possible to create an interoperability platform that is independent of device type and application domain and that aims to provide a Semantic Web of information shareable among software entities and devices, creating smart environments and links between the real and virtual worlds. This field is growing steadily thanks to continuous advances both in technology, particularly device miniaturization, and in the capabilities of embedded systems. Through the ever-increasing use of sensors and actuators, these systems enable the processing of information coming from the outside world. Clearly, software of this scope lends itself to a multitude of applications; in biomedicine, some of these take shape in telemedicine and e-Health systems. e-Health denotes the use of tools based on information and communication technologies to support and promote the prevention, diagnosis, treatment, and monitoring of diseases and the management of health and lifestyle. The goal of this thesis is to provide a data set that helps optimize and refine the criteria for choosing among such architectures. We measure performance and the ability to carry out, more or less quickly, precisely, and accurately, the particular task the software was designed for. This rests on running a benchmark on several Smart-M3 implementations and, in particular, on the central component called the SIB (Semantic Information Broker).
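The benchmarking approach described is easy to picture in code. Below is a minimal sketch of the measurement pattern, using rdflib's in-memory graph as a stand-in for a SIB; a real run would direct the same timed insert and query operations at each Smart-M3 SIB implementation over its own protocol, and the namespace and counts here are illustrative.

```python
# Minimal benchmark sketch: times insert and query operations against a
# triple store, the measurement pattern used to compare SIB
# implementations. rdflib's in-memory Graph stands in for a SIB here.
import time
from rdflib import Graph, Literal, Namespace

NS = Namespace("http://example.org/bench/")  # hypothetical namespace

def bench_inserts(graph, n):
    """Insert n triples and return elapsed seconds."""
    start = time.perf_counter()
    for i in range(n):
        graph.add((NS[f"sensor{i}"], NS.hasValue, Literal(i)))
    return time.perf_counter() - start

def bench_query(graph):
    """Run a pattern query and return (elapsed seconds, result count)."""
    start = time.perf_counter()
    rows = list(graph.query(
        "SELECT ?s ?v WHERE { ?s <http://example.org/bench/hasValue> ?v }"))
    return time.perf_counter() - start, len(rows)

g = Graph()
print("insert:", bench_inserts(g, 10_000))
print("query :", bench_query(g))
```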
Abstract:
This thesis concerns the development of an application that exploits Semantic Web and Text Mining technologies. The application extends work from a previous thesis by adding a semantic search feature. This feature allows the retrieval of information that a conventional search would miss. To achieve this, it uses WordNet, a lexical-semantic database, and a library for Latent Semantic Analysis, a Text Mining technique.
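Both ingredients are available in common Python libraries, so a minimal sketch of the idea is easy to give. The code below uses NLTK's WordNet interface and scikit-learn's TruncatedSVD as stand-ins for whatever libraries the thesis actually employed; the corpus is a toy.

```python
# Sketch of the two ingredients: WordNet-based query expansion plus
# Latent Semantic Analysis over a toy corpus.
from nltk.corpus import wordnet as wn        # requires nltk.download("wordnet")
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def expand(term):
    """Collect WordNet synonyms of a term to widen the search query."""
    return {lemma.name() for syn in wn.synsets(term) for lemma in syn.lemmas()}

docs = ["the car needs an engine repair",
        "automobile engines and their maintenance",
        "recipes for vegetable soup"]

query = " ".join({"car"} | expand("car"))    # e.g. adds "automobile", "auto"

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs + [query])
lsa = TruncatedSVD(n_components=2).fit_transform(X)
# Rank documents by similarity to the expanded query in the latent space.
sims = cosine_similarity(lsa[-1:], lsa[:-1])[0]
print(sorted(zip(sims, docs), reverse=True))
```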
Abstract:
Here we present the Transcription Factor Encyclopedia (TFe), a new web-based compendium of mini review articles on transcription factors (TFs) that is founded on the principles of open access and collaboration. Our consortium of over 100 researchers has collectively contributed over 130 mini review articles on pertinent human, mouse and rat TFs. Notable features of the TFe website include a high-quality PDF generator and web API for programmatic data retrieval. TFe aims to rapidly educate scientists about the TFs they encounter through the delivery of succinct summaries written and vetted by experts in the field.
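As a rough illustration of what programmatic data retrieval via a web API looks like, here is a hypothetical Python sketch; the base URL, route, and parameters are placeholders, not TFe's documented interface.

```python
# Hypothetical sketch of fetching a record from a web API such as TFe's.
# The endpoint below is an illustrative placeholder, not the real TFe URL.
import requests

BASE = "https://example.org/tfe/api"  # placeholder

def fetch_tf_summary(gene_symbol):
    """Request a transcription-factor summary record as JSON."""
    resp = requests.get(f"{BASE}/summary", params={"gene": gene_symbol},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()

# print(fetch_tf_summary("TP53"))
```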
Abstract:
In the early 1990s, ontology development resembled an art: ontology developers had no clear guidelines on how to build ontologies, only some design criteria to follow. Work on principles, methods, and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science, (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics, and education, and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, covering the most prominent and widely used methodologies, languages, and tools for building ontologies. In addition, we include some words on how all these elements can be used in the Linked Data initiative.
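As a concrete taste of what building an ontology involves, the sketch below declares two OWL classes and a subclass relation with rdflib; the namespace and terms are illustrative, and the methodologies surveyed in the paper of course go far beyond this.

```python
# A few lines of ontology building in practice: two classes and a
# subclass relation, serialized as Turtle. A minimal sketch only.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/onto#")  # illustrative namespace
g = Graph()
g.bind("ex", EX)

g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.Researcher, RDF.type, OWL.Class))
g.add((EX.Researcher, RDFS.subClassOf, EX.Person))
g.add((EX.Researcher, RDFS.label, Literal("Researcher", lang="en")))

print(g.serialize(format="turtle"))
```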
Abstract:
We are living through an age of Internetification. Nowadays, Internet connections are a utility whose presence one can simply assume. The Web has become a place where content is generated by users. The information generated surpasses the notion with which the World Wide Web emerged because, in most cases, this content has been designed to be consumed by humans, not by machines. This implies a change of mindset in the way we design systems, which must support computational and storage loads that grow without apparent end. At the same time, higher education is in a state of crisis: the high cost of quality education threatens the academic world. Technology can deliver an increase in productivity and quality, and a reduction of those costs, in a field that has remained largely unchanged since the Renaissance. In CloudRoom, a MOOC platform has been designed with an architecture that follows the latest conventions in Cloud Computing: it relies on REST services and NoSQL databases, and adopts the latest W3C recommendations on web development and Linked Data. It was built using agile Software Engineering methods, Human-Computer Interaction techniques, and state-of-the-art technologies such as Neo4j, Redis, Node.js, AngularJS, Bootstrap, HTML5, CSS3, and Amazon Web Services. The result is a comprehensive piece of Informatics Engineering that combines virtually all of the fundamental areas of knowledge in Computer Science. In short, the work lays the foundations of a robust, maintainable, distributed system with social and semantic capabilities, one that runs on multiple devices and scales to millions of users.
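The architectural style described, REST services carrying Linked Data payloads, can be sketched in a few lines. The Flask route below is a hypothetical stand-in (the actual CloudRoom stack is Node.js-based) showing a course resource served as JSON-LD; route and fields are illustrative.

```python
# Illustrative sketch of a REST endpoint serving a resource as JSON-LD.
# Not CloudRoom's code: a hypothetical stand-in for the pattern.
from flask import Flask, jsonify

app = Flask(__name__)

COURSES = {"101": {"name": "Linked Data Basics", "provider": "CloudRoom"}}

@app.route("/courses/<course_id>")
def get_course(course_id):
    course = COURSES.get(course_id)
    if course is None:
        return jsonify({"error": "not found"}), 404
    # JSON-LD: plain JSON plus a @context mapping keys to vocabulary terms.
    return jsonify({
        "@context": {"name": "http://schema.org/name",
                     "provider": "http://schema.org/provider"},
        "@id": f"/courses/{course_id}",
        **course,
    })

if __name__ == "__main__":
    app.run()
```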
Abstract:
This work presents the Linked Data approach, which uses descriptions written in RDF to make explicit to machines the semantic links that exist among the resources populating the Web. It then describes the DBpedia project, which aims to reorganize the information available on Wikipedia in Linked Data format, so as to make it easier for users to consult and to enable the execution of complex queries. It goes on to discuss the challenge of integrating multimedia content (images, audio files, video, and so on) into DBpedia, analyzing three projects working in that direction: Multipedia, DBpedia Commons, and IMGpedia. Finally, it highlights the importance and potential of building a Semantic Web.
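A small example makes the "complex queries" point concrete. The sketch below runs a SPARQL query against DBpedia's public endpoint using the SPARQLWrapper library; the result set will vary as the dataset evolves.

```python
# Query DBpedia's public SPARQL endpoint: Italian cities with a
# recorded population above one million.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?city ?pop WHERE {
      ?city a dbo:City ;
            dbo:country dbr:Italy ;
            dbo:populationTotal ?pop .
      FILTER (?pop > 1000000)
    }
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["city"]["value"], row["pop"]["value"])
```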
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
A web interface agent works alongside web browsers to assist users in searching and interacting with the WWW. It serves a variety of purposes, such as web-enabled remote control, interactive web visualization, and e-commerce activities, and users may or may not be aware of its existence. The intelligence of an interface agent lies in its ability to learn and to make decisions when performing interactive functions on behalf of a user. However, since the Web is an open environment, the agent's reasoning mechanism must adapt to change and make decisions in exceptional situations, and therefore needs meta knowledge. This paper proposes a framework, the Reflective Web Interface Agent (RWIA), that provides causal connections between the application interfaces and the knowledge model of the interface agent. A prototype is also implemented for demonstration purposes.
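The role of meta knowledge is easiest to see in a toy example. The sketch below is a generic illustration, not the paper's RWIA model: base rules act on the user's behalf, while a meta layer inspects each situation and overrides the base decision in exceptional cases. All names are hypothetical.

```python
# Toy reflective-agent sketch: meta-rules reconsider base decisions.
def base_decide(request):
    """Ordinary behaviour: act on the user's behalf."""
    return f"execute:{request['action']}"

def meta_decide(request, decision):
    """Meta knowledge: override the base decision on exceptions."""
    if request.get("confidence", 1.0) < 0.5:
        return "ask_user"        # too uncertain to act autonomously
    if request["action"] == "purchase" and request.get("amount", 0) > 100:
        return "ask_user"        # exceptional situation: confirm first
    return decision

request = {"action": "purchase", "amount": 250, "confidence": 0.9}
print(meta_decide(request, base_decide(request)))   # -> ask_user
```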
Abstract:
Extensible Business Reporting Language (XBRL) is being adopted by European regulators as a data standard for the exchange of business information. This paper examines the approach of XBRL International (XII) to the meta-data standard's development and diffusion. We theorise the development of XBRL using concepts drawn from a model of successful open source projects. Comparing the open source model to XBRL enables us to identify a number of interesting similarities and differences. In common with open source projects, the benefits and progress of XBRL have been overstated and 'hyped' by enthusiastic participants. While XBRL is an open data standard in terms of access to the equivalent of its 'source code', we find that the governance structure of the XBRL consortium differs significantly from a model open source approach. The barrier to participation created by requiring paid membership, and a focus on transacting business at physical conferences and meetings, are identified as particularly critical. Decisions about the technical structure of XBRL, the regulator-led pattern of adoption, and the organisation of XII are discussed. Finally, areas for future research are identified.
Abstract:
Sentiment analysis, or opinion mining, aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework called the joint sentiment-topic (JST) model, based on latent Dirichlet allocation (LDA), which detects sentiment and topic simultaneously from text. A reparameterized version of the JST model called Reverse-JST, obtained by reversing the sequence of sentiment and topic generation in the modeling process, is also studied. Although JST is equivalent to Reverse-JST without a hierarchical prior, extensive experiments show that when sentiment priors are added, JST performs consistently better than Reverse-JST. Moreover, unlike supervised approaches to sentiment classification, which often fail to produce satisfactory performance when shifting to other domains, the weakly supervised nature of JST makes it highly portable. This is verified by experimental results on data sets from five different domains, where the JST model even outperforms existing semi-supervised approaches on some of the data sets despite using no labeled documents. The topics and topic sentiments detected by JST are coherent and informative. We hypothesize that the JST model can readily meet the demand of large-scale sentiment analysis from the web in an open-ended fashion.
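No off-the-shelf JST implementation is assumed here, so the sketch below fits the underlying LDA model that JST extends, using gensim on a toy corpus; JST's contribution is to attach a sentiment label to each topic draw on top of this machinery.

```python
# Fit plain LDA, the base model that JST extends with a sentiment layer.
from gensim import corpora, models

texts = [["great", "phone", "battery"],
         ["terrible", "battery", "life"],
         ["great", "camera", "photos"]]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      random_state=0, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```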
Abstract:
The current INFRAWEBS European research project aims at developing an ICT framework that enables software and service providers to generate and establish open, extensible development platforms for Web Service applications. One concrete project objective is a full-life-cycle software toolset for creating and maintaining Semantic Web Services (SWSs) supporting specific applications based on the Web Service Modelling Ontology (WSMO) framework. According to WSMO, the functional and behavioural descriptions of a SWS may be represented by means of complex logical expressions (axioms). The paper describes a specialized, user-friendly tool for constructing and editing such axioms, the INFRAWEBS Axiom Editor. After discussing the main design principles of the Editor, its functional architecture is briefly presented. The tool is implemented on the Eclipse Graphical Environment Framework and the Eclipse Rich Client Platform.
Abstract:
While openness is well established in software development and exploitation (open source) and has been successfully applied to new business models (open innovation), fundamental and applied research seems to lag behind. Even after decades of advocacy, in 2011 only 50% of publicly funded research was freely available and accessible (Archambault et al., 2013). Current research workflows, stemming from a pre-internet age, result in lost opportunity not only for the researchers themselves (cf. the extensive literature at the Open Access citation project, http://opcit.eprints.org/) but also slow down innovation and the application of research results (Houghton & Swan, 2011). Recent studies continue to suggest that lack of awareness among researchers, rather than lack of e-infrastructure or methodology, is a key reason for this lost opportunity (Graziotin, 2014). The session will focus on why Open Science is ideally suited to achieving tenure-relevant researcher impact in a "Publish or Perish" reality. Open Science encapsulates tools and approaches for each step of the research cycle, from Open Notebook Science to Open Data and Open Access, all setting researchers up to capitalise on social media in order to promote and discuss their work and to establish unexpected collaborations. Incorporating these new approaches into an updated personal research workflow is of strategic benefit to young researchers, and will prepare them for the expected long-term trend among funders towards greater openness and demand for greater return on investment (ROI) for public funds.
Abstract:
The Semantic Binary Data Model (SBM) is a viable alternative to the now-dominant relational data model. SBM would be especially advantageous for applications dealing with complex interrelated networks of objects, provided that a robust, efficient implementation can be achieved. This dissertation presents an implementation design method for SBM, algorithms, and their analytical and empirical evaluation. Our method allows building a robust and flexible database engine with a wider applicability range and improved performance. Extensions to SBM are introduced, and an implementation of these extensions is proposed that allows the database engine to efficiently support applications with a predefined set of queries. A new Record data structure is proposed, and the trade-offs of employing Fact, Record, and Bitmap data structures for storing information in a semantic database are analyzed. A clustering ID-distribution algorithm and an efficient algorithm for object ID encoding are proposed. Mapping to an XML data model is analyzed, and a new XML-based XSDL language facilitating interoperability of the system is defined. Solutions to the issues associated with making the database engine multi-platform are presented, along with an improvement to the atomic update algorithm suitable for certain database-recovery scenarios. Finally, specific guidelines are devised for implementing a robust and well-performing database engine based on the extended Semantic Data Model.
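The fact-based storage idea is easier to see with a toy example. The sketch below is a hypothetical illustration only: every assertion is stored as a (subject, relation, value) fact, indexed in both directions so interrelated objects can be traversed without joins. The dissertation's actual Fact, Record, and Bitmap structures and its ID encoding are far more elaborate.

```python
# Toy fact store: bidirectional indexes over (subject, relation, value).
from collections import defaultdict

class FactStore:
    def __init__(self):
        self.forward = defaultdict(list)   # (subject, relation) -> values
        self.inverse = defaultdict(list)   # (relation, value) -> subjects

    def add(self, subject, relation, value):
        self.forward[(subject, relation)].append(value)
        self.inverse[(relation, value)].append(subject)

    def values_of(self, subject, relation):
        return self.forward[(subject, relation)]

    def subjects_with(self, relation, value):
        return self.inverse[(relation, value)]

db = FactStore()
db.add("emp1", "works_in", "dept7")
db.add("emp2", "works_in", "dept7")
print(db.subjects_with("works_in", "dept7"))   # -> ['emp1', 'emp2']
```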
Abstract:
The Brazilian CAPES Journal Portal aims to provide Information in Science and Technology (IST) to academic users. It is therefore considered a relevant instrument for post-graduation dynamics and for the country's Science and Technology (S&T) development. Despite its importance, few studies have focused on policy analysis and the efficiency of these resources. This research aims to fill that gap by analyzing the use of the CAPES Journal Portal by master's and doctoral alumni of the Post Graduate Program in Management (PPGA) at the Federal University of Rio Grande do Norte (UFRN). The main objective was operationalized through the following specific objectives: a) characterize the graduates' profile as CAPES Journal Portal users; b) identify their motivations for using the Portal; c) measure their degree of satisfaction with information seeking in the Portal; d) assess their satisfaction with the use of the Portal; e) examine how graduates use the information obtained in their academic activities. The research is descriptive in nature and employs a mixed methodological strategy in which the quantitative approach predominates. Data were collected through a web survey questionnaire. Quantitative data were analyzed statistically; the qualitative analysis drew on Brenda Dervin's sense-making approach and on content analysis of the open-ended questions. The sample comprised 90 graduates who had defended their dissertation or thesis in the PPGA program at UFRN between 2010 and 2013, representing 88% of this population. As for the user profile, the analysis shows no quantitative differences related to gender. Male graduates were predominantly aged 26 to 30, while most female graduates were aged 31 to 35. Most graduates held a master's scholarship to support their studies, and the great majority claimed to use the Portal during their post-graduate studies. The main reasons for non-use were a preference for other databases and lack of knowledge about the Portal. The most used information resources were theses and dissertations, and the data indicate a preference for full text. Those who used the Portal also turned to other electronic information sources to fulfill their information needs; the sources consulted outside the Portal were monographs, dissertations, and theses, with SciELO the most used. The results reveal that the Portal was accessed and used regularly during post-graduate studies, although graduates also relied on other electronic information sources to meet their information needs. The study also confirmed the Portal's important mission for Brazilian scientific communication, even though users reported the need for improvement in some respects: periodic training to promote, encourage, and teach more effective use of the Portal; investment to expand the Portal's Social Sciences collection; and the implementation of a continuous process for evaluating user satisfaction with the services provided.