41 results for Web data
Abstract:
Existing semantic search tools have been primarily designed to enhance the performance of traditional search technologies, but with little support for ordinary end users who are not necessarily familiar with domain-specific semantic data, ontologies, or SQL-like query languages. This paper presents SemSearch, a search engine that addresses this issue by providing several means to hide the complexity of semantic search from end users, thus making it easy to use and effective.
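As a rough illustration of the kind of complexity-hiding layer the abstract describes, the sketch below accepts a plain keyword and builds a SPARQL query behind the scenes, so the user never writes formal query syntax. The endpoint URL and the label-matching strategy are assumptions for illustration, not SemSearch's actual mechanism.

```python
# A minimal sketch of hiding SPARQL behind a keyword box.
# The endpoint and matching heuristic are illustrative assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

def keyword_to_sparql(keyword: str) -> str:
    # Match any resource whose rdfs:label contains the keyword, case-insensitively.
    return f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?resource ?label WHERE {{
            ?resource rdfs:label ?label .
            FILTER (CONTAINS(LCASE(STR(?label)), LCASE("{keyword}")))
        }}
        LIMIT 20
    """

sparql = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
sparql.setQuery(keyword_to_sparql("news"))
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["resource"]["value"], "-", row["label"]["value"])
```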
Abstract:
Disasters cause widespread harm and disrupt the normal functioning of society, and effective management requires the participation and cooperation of many actors. While advances in information and networking technology have made the transmission of data easier than ever before, communication and coordination of activities between actors remain exceptionally difficult. This paper employs semantic web technology and Linked Data principles to create a network of intercommunicating and interdependent online sites for managing resources. Each site publishes its available resources openly, and a lightweight open-data protocol is used to issue and answer requests for resources between sites in the network.
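The abstract does not spell out the protocol, but a minimal sketch of a site-to-site resource request over HTTP, assuming a hypothetical JSON payload and endpoint path, might look like this:

```python
# A hedged sketch of a lightweight open-data exchange between two sites.
# The URL, endpoint path and payload fields are illustrative assumptions,
# not the paper's actual protocol.
import requests

request_body = {
    "resource": "water-purification-units",
    "quantity": 50,
    "needed-by": "2013-06-01",
    "requesting-site": "http://site-a.example.org/",
}

# POST the request to a peer site's (hypothetical) resource endpoint.
response = requests.post(
    "http://site-b.example.org/resources/requests",
    json=request_body,
    timeout=10,
)
offer = response.json()
print(offer.get("status"), offer.get("quantity-available"))
```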
Abstract:
This paper describes an online survey that was conducted to explore typical Internet users' awareness and knowledge of specific technologies that relate to their security and privacy when using a Web browser to access the Internet. The survey was conducted using an anonymous, online questionnaire. Over a four-month period, 237 individuals completed the questionnaire. Respondents were predominantly Canadian, with substantial numbers from the United Kingdom and the United States. Important findings include evidence that users have tried to educate themselves regarding their online security and privacy, but with limited success; different interpretations of the term "secure Web site" can lead to very different levels of trust in a site; respondents strongly expressed their skepticism about privacy policies, but nevertheless believe that sites can be trusted to respect their stated policies; and users may confuse browser cookies with other types of data stored locally by browsers, leading to inappropriate conclusions about the risks they present.
Abstract:
The Protein pKa Database (PPD) v1.0 provides a compendium of protein residue-specific ionization equilibria (pKa values), as collated from the primary literature, in the form of a web-accessible PostgreSQL relational database. Ionizable residues play key roles in the molecular mechanisms that underlie many biological phenomena, including protein folding and enzyme catalysis. The PPD serves as a general protein pKa archive and as a source of data that allows for the development and improvement of pKa prediction systems. The database is accessed through an HTML interface, which offers two fast, efficient search methods: an amino acid-based query and a Basic Local Alignment Search Tool search. Entries also give details of experimental techniques and links to other key databases, such as the National Center for Biotechnology Information and the Protein Data Bank, providing the user with considerable background information.
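A hedged sketch of what the amino acid-based query could look like when expressed directly against a relational store; the table and column names are assumptions for illustration, as the abstract does not give the real PPD schema:

```python
# Query measured pKa values for histidine residues of one protein.
# Schema (residue_pka table, column names) is a hypothetical stand-in.
import psycopg2

conn = psycopg2.connect(dbname="ppd", user="reader", host="localhost")
cur = conn.cursor()
cur.execute("""
    SELECT residue_number, pka_value, experimental_method
    FROM residue_pka
    WHERE protein_id = %s AND residue_type = %s
    ORDER BY residue_number
""", ("P00698", "HIS"))   # hypothetical protein identifier
for number, pka, method in cur.fetchall():
    print(f"His{number}: pKa = {pka} ({method})")
conn.close()
```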
Abstract:
The performance of a supply chain depends critically on the coordinating actions and decisions undertaken by the trading partners. The sharing of product and process information plays a central role in this coordination and is a key driver for the success of the supply chain. In this paper we propose the concept of "Linked Pedigrees" - linked datasets that enable the sharing of traceability information about products as they move along the supply chain. We present a distributed, decentralised, linked data driven architecture that consumes real-time supply chain linked data to generate linked pedigrees. We then present a communication protocol to enable the exchange of linked pedigrees among trading partners. We exemplify the utility of linked pedigrees by illustrating examples from the perishable goods logistics supply chain.
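A minimal sketch of what one pedigree fragment could look like as linked data, built with rdflib; the vocabulary, class and property names are illustrative assumptions, not the paper's actual ontology:

```python
# Build a small "linked pedigree" event as RDF and serialize it as Turtle.
# The ped: namespace and its terms are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF

PED = Namespace("http://example.org/pedigree#")
g = Graph()
g.bind("ped", PED)

event = PED["shipment-42"]
g.add((event, RDF.type, PED.PedigreeEvent))
g.add((event, PED.product, PED["batch-2013-0815"]))
g.add((event, PED.sender, Literal("Producer Ltd")))
g.add((event, PED.receiver, Literal("Distributor GmbH")))
# Link to the pedigree fragment published by the upstream partner,
# so partners can follow the chain across sites.
g.add((event, PED.previousPedigree, PED["shipment-41"]))

print(g.serialize(format="turtle"))
```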
Abstract:
The evaluation of geospatial data quality and trustworthiness presents a major challenge to geospatial data users when making a dataset selection decision. The research presented here therefore focused on defining and developing a GEO label – a decision support mechanism to assist data users in efficient and effective geospatial dataset selection on the basis of quality, trustworthiness and fitness for use. This thesis thus presents six phases of research and development conducted to: (1) identify the informational aspects upon which users rely when assessing geospatial dataset quality and trustworthiness; (2) elicit initial user views on the GEO label role in supporting dataset comparison and selection; (3) evaluate prototype label visualisations; (4) develop a Web service to support GEO label generation; (5) develop a prototype GEO label-based dataset discovery and intercomparison decision support tool; and (6) evaluate the prototype tool in a controlled human-subject study. The results of the studies revealed, and subsequently confirmed, eight geospatial data informational aspects that were considered important by users when evaluating geospatial dataset quality and trustworthiness, namely: producer information, producer comments, lineage information, compliance with standards, quantitative quality information, user feedback, expert reviews, and citations information. Following an iterative user-centred design (UCD) approach, it was established that the GEO label should visually summarise availability and allow interrogation of these key informational aspects. A Web service was developed to support generation of dynamic GEO label representations and integrated into a number of real-world GIS applications. The service was also utilised in the development of the GEO LINC tool – a GEO label-based dataset discovery and intercomparison decision support tool. The results of the final evaluation study indicated that (a) the GEO label effectively communicates the availability of dataset quality and trustworthiness information and (b) GEO LINC successfully facilitates ‘at a glance’ dataset intercomparison and fitness for purpose-based dataset selection.
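As a rough sketch of the availability summary a GEO label conveys, the snippet below checks a dataset's metadata for the eight informational aspects named above; the metadata layout and function name are assumptions, not the thesis's actual Web service API:

```python
# Map each of the eight GEO label aspects to available/missing
# based on whether the dataset's metadata records it.
ASPECTS = [
    "producer information", "producer comments", "lineage information",
    "compliance with standards", "quantitative quality information",
    "user feedback", "expert reviews", "citations information",
]

def summarise_availability(metadata: dict) -> dict:
    """Return True for each aspect present in the metadata."""
    return {aspect: bool(metadata.get(aspect)) for aspect in ASPECTS}

example_metadata = {  # hypothetical dataset metadata
    "producer information": "National mapping agency",
    "lineage information": "Derived from 2010 aerial survey",
}
for aspect, available in summarise_availability(example_metadata).items():
    print(f"{aspect}: {'available' if available else 'missing'}")
```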
Abstract:
The manufacturing industry faces many challenges, such as reducing time-to-market and cutting costs. To meet these increasing demands, effective methods are needed to support the early product development stages by bridging the gap between communicating early design ideas and evaluating manufacturing performance. This paper introduces methods of linking the design and manufacturing domains using disparate technologies. The combined technologies include knowledge management support for product lifecycle management systems, Enterprise Resource Planning (ERP) systems, aggregate process planning systems, workflow management, and data exchange formats. A case study demonstrates the use of these technologies: manufacturing knowledge is added to generate alternative early process plans, which are in turn used by an ERP system to obtain and optimise a rough-cut capacity plan.
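As a hedged illustration of the rough-cut capacity step mentioned in the case study, the sketch below sums estimated machine hours per work centre from an early process plan and compares them with available capacity; all numbers and field names are invented for illustration and do not come from the paper:

```python
# Rough-cut capacity check: aggregate planned hours per work centre
# and flag centres where demand exceeds capacity. Data is hypothetical.
from collections import defaultdict

# (work centre, estimated machine hours for the batch) from an early process plan
process_plan = [
    ("milling", 12.0), ("turning", 8.5), ("milling", 4.0), ("assembly", 6.0),
]
available_hours = {"milling": 14.0, "turning": 10.0, "assembly": 8.0}

load = defaultdict(float)
for centre, hours in process_plan:
    load[centre] += hours

for centre, required in sorted(load.items()):
    capacity = available_hours.get(centre, 0.0)
    verdict = "feasible" if required <= capacity else "overloaded"
    print(f"{centre}: {required:.1f}h required vs {capacity:.1f}h available -> {verdict}")
```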
Abstract:
This paper provides a summary of the Social Media and Linked Data for Emergency Response (SMILE) workshop, co-located with the Extended Semantic Web Conference in Montpellier, France, in 2013. Following the paper presentations and question-and-answer sessions, an extensive discussion and roadmapping session was organised involving the workshop chairs and attendees. Three main topics guided the discussion: challenges, opportunities and showstoppers. In this paper, we present our roadmap towards effectively exploiting social media and semantic web techniques for emergency response and crisis management.
Abstract:
This paper looks at the issue of privacy and anonymity through the prism of Scott's concept of legibility, i.e. the desire of the state to obtain an ever more accurate mapping of its domain and the actors in its domain. We argue that privacy was absent in village life in the past, and that it is a temporary phenomenon arising from the lack of appropriate technology to make all life in the city legible. Cities have been the loci of creativity for the major part of human civilisation, and there is something specific about the illegibility of cities that facilitates creativity and innovation. By providing the technology to catalogue and classify all objects and ideas around us, semantic web technologies, Linked Data and the Internet of Things unwittingly further this ever greater legibility. There is a danger that the over-description of a domain will lead to a loss of creativity and innovation. We conclude by arguing that our prime concern must be to preserve illegibility, because the survival of some form, any form, of civilisation depends upon it.
Abstract:
eHabitat is a Web Processing Service (WPS) designed to compute the likelihood of finding ecosystems with equal properties. Inputs to the WPS, typically thematic geospatial "layers", can be discovered using standardised catalogues, and the outputs tailored to specific end user needs. Because these layers can range from geophysical data captured through remote sensing to socio-economic indicators, eHabitat is exposed to a broad range of different types and levels of uncertainties. Potentially chained to other services to perform ecological forecasting, for example, eHabitat would be an additional component further propagating uncertainties from a potentially long chain of model services. This integration of complex resources increases the challenges in dealing with uncertainty. For such a system, as envisaged by initiatives such as the "Model Web" from the Group on Earth Observations, to be used for policy or decision making, users must be provided with information on the quality of the outputs, since all system components will be subject to uncertainty. UncertWeb will create the Uncertainty-Enabled Model Web by promoting interoperability between data and models with quantified uncertainty, building on existing open, international standards. It is the objective of this paper to illustrate a few key ideas behind UncertWeb, using eHabitat to discuss the main types of uncertainties the WPS has to deal with and to present the benefits of the use of the UncertWeb framework.
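A minimal sketch of the core idea, under the assumption that an uncertain input is described as a distribution: a model service can propagate that uncertainty by Monte Carlo simulation and report a distribution over its output. The model function here is a stand-in, not eHabitat's actual similarity computation:

```python
# Monte Carlo propagation of input uncertainty through a model service.
# The model and all distribution parameters are illustrative assumptions.
import random

def model(rainfall_mm: float, temperature_c: float) -> float:
    # Stand-in ecological indicator combining two uncertain inputs.
    return 0.7 * rainfall_mm / 1000.0 + 0.3 * (30.0 - temperature_c) / 30.0

N = 10_000
outputs = [
    model(random.gauss(850.0, 60.0),   # rainfall: mean 850 mm, sd 60 mm
          random.gauss(21.0, 1.5))     # temperature: mean 21 C, sd 1.5 C
    for _ in range(N)
]
mean = sum(outputs) / N
sd = (sum((y - mean) ** 2 for y in outputs) / (N - 1)) ** 0.5
print(f"indicator: mean = {mean:.3f}, standard deviation = {sd:.3f}")
```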
Abstract:
UncertWeb is a European research project running from 2010 to 2013 that will realize the uncertainty-enabled Model Web. The assumption is that data services, in order to be useful, need to provide information about the accuracy or uncertainty of the data in a machine-readable form. Models taking these data as input should understand this and propagate errors through model computations, and quantify and communicate errors or uncertainties generated by the model approximations. The project will develop technology to realize this and provide demonstration case studies.
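A hedged sketch of what "uncertainty in a machine-readable form" can look like: instead of returning a bare number, a data service returns a distribution description that downstream models can parse and propagate. The JSON layout is an assumption, loosely in the spirit of UncertML-style encodings rather than the project's actual format:

```python
# Encode an observation together with a quantified uncertainty description,
# so a consuming model can propagate it instead of treating it as exact.
import json

observation = {
    "variable": "air_temperature",
    "units": "Cel",
    "uncertainty": {
        "type": "GaussianDistribution",  # hypothetical encoding
        "mean": 21.0,
        "variance": 2.25,
    },
}
print(json.dumps(observation, indent=2))
```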