908 results for Semantic Web Services


Relevance:

80.00%

Publisher:

Abstract:

Purpose – Interactive information retrieval (IR) involves many human cognitive shifts at different information behaviour levels. Cognitive science defines a cognitive shift, or shift in cognitive focus, as a change triggered by the brain's response to some external force. This paper explicates the concept of "cognitive shift" and then reports results from a study replicating Spink's study of cognitive shifts during interactive IR. This work aims to generate promising insights into aspects of cognitive shifts during interactive IR and a new IR evaluation measure – information problem shift. Design/methodology/approach – The study participants (n=9) conducted an online search on an in-depth personal medical information problem. The data analysed included the pre- and post-search questionnaires completed by each study participant. Implications for web services and further research are discussed. Findings – Key findings replicated the results of Spink's study: all study participants reported some level of cognitive shift in their information problem, information seeking and personal knowledge due to their search interaction, and different study participants reported different levels of cognitive shift. Some study participants reported major cognitive shifts in user-based variables such as information problem or information-seeking stage. Unlike in Spink's study, no participant experienced a negative shift in their information problem stage or level of information problem understanding. Originality/value – This study builds on the previous study by Spink using a different dataset. The paper provides valuable insights for further research into cognitive shifts during interactive IR.

Relevance:

80.00%

Publisher:

Abstract:

A service-oriented system is composed of independent software units, namely services, that interact with one another exclusively through message exchanges. The proper functioning of such a system depends on whether each individual service behaves as the other services expect it to behave. Since services may be developed and operated independently, it is unrealistic to assume that this is always the case. This article addresses the problem of checking and quantifying how much the actual behavior of a service, as recorded in message logs, conforms to the expected behavior as specified in a process model. We consider the case where the expected behavior is defined using the BPEL industry standard (Business Process Execution Language for Web Services). BPEL process definitions are translated into Petri nets, and Petri net-based conformance checking techniques are applied to derive two complementary indicators of conformance: fitness and appropriateness. The approach has been implemented in a toolset for business process analysis and mining, namely ProM, and has been tested in an environment comprising multiple Oracle BPEL servers.
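The fitness indicator can be illustrated with a simplified token-replay over a Petri net. The net encoding, trace format and function names below are illustrative assumptions for a sketch, not the ProM implementation:

```python
from collections import Counter

def replay_fitness(net, initial_marking, final_marking, trace):
    """Replay a log trace on a Petri net, counting produced, consumed,
    missing and remaining tokens, then combine them in the style of the
    Rozinat & van der Aalst fitness measure:
        f = 1/2 (1 - missing/consumed) + 1/2 (1 - remaining/produced)
    `net` maps an activity label to (input places, output places)."""
    marking = Counter(initial_marking)
    produced = sum(marking.values())          # initial tokens count as produced
    consumed = missing = 0

    for activity in trace:
        inputs, outputs = net[activity]
        for place in inputs:                  # consume; record missing tokens
            if marking[place] == 0:
                missing += 1
            else:
                marking[place] -= 1
            consumed += 1
        for place in outputs:                 # produce
            marking[place] += 1
            produced += 1

    for place in final_marking:               # consume the expected final marking
        if marking[place] == 0:
            missing += 1
        else:
            marking[place] -= 1
        consumed += 1
    remaining = sum(marking.values())         # leftover tokens

    return 0.5 * (1 - missing / consumed) + 0.5 * (1 - remaining / produced)
```

A fully conforming trace yields fitness 1.0; skipped or superfluous activities lower it through missing and remaining tokens.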

Relevance:

80.00%

Publisher:

Abstract:

Privacy issues have hindered the evolution of e-health since its emergence. Patients demand better solutions for the protection of private information. Health professionals demand open access to patient health records. Existing e-health systems find it difficult to fulfill these competing requirements. In this paper, we present an information accountability framework (IAF) for e-health systems. The IAF is intended to address privacy issues and their competing concerns related to e-health. Capabilities of the IAF adhere to information accountability principles and e-health requirements. Policy representation and policy reasoning are key capabilities introduced in the IAF. We investigate how these capabilities are feasible using Semantic Web technologies. Using a case scenario, we discuss how the different types of policies in the IAF can be represented using the Open Digital Rights Language (ODRL).
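As a rough illustration of the policy-reasoning-plus-accountability idea (not the IAF itself, and not ODRL proper, which is a JSON-LD/XML rights vocabulary), a minimal sketch with hypothetical roles, actions and resource names might look like:

```python
# Every access decision is both evaluated against the policy and logged,
# so that use of a record can be audited after the fact.
audit_log = []

def decide(policy, role, action, resource):
    """Return whether (role, action) is permitted, logging the decision."""
    allowed = (role, action) in policy["permissions"]
    audit_log.append({"role": role, "action": action,
                      "resource": resource, "allowed": allowed})
    return allowed

# Hypothetical policy over an electronic health record.
ehr_policy = {"permissions": {("treating-doctor", "read"),
                              ("treating-doctor", "annotate"),
                              ("patient", "read")}}
```

In a real IAF the policy would be an ODRL document and the reasoning done over its semantics; the point here is only that decisions and their audit trail are produced together.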

Relevance:

80.00%

Publisher:

Abstract:

The next generation of service-oriented architecture (SOA) needs to scale for flexible service consumption, beyond organizational and application boundaries, into communities, ecosystems and business networks. In wider and, ultimately, global settings, new capabilities are needed so that business partners can efficiently and reliably enable, adapt and expose services. Those services can then be discovered, ordered, consumed, metered and paid for, through new applications and opportunities, driven by third parties in the global "village". This trend is already underway, in different ways, through different early adopter market segments. This paper proposes an architectural strategy for the provisioning and delivery of services in communities, ecosystems and business networks – a Service Delivery Framework (SDF). The SDF is intended to support multiple industries and deployments where a SOA platform is needed for collaborating partners and diverse consumers. Specifically, it is envisaged that the SDF allows providers to publish their services into network directories so that they can be repurposed, traded and consumed, while leveraging network utilities like B2B gateways and cloud hosting. To support these different facets of service delivery, the SDF extends the conventional service provider, service broker and service consumer of the Web Services Architecture to include service gateway, service hoster, service aggregator and service channel maker.

Relevance:

80.00%

Publisher:

Abstract:

Using complex event rules for capturing dependencies between business processes is an emerging trend in enterprise information systems. In previous work we have identified a set of requirements for event extensions for business process modeling languages. This paper introduces a graphical language for modeling composite events in business processes, namely BEMN, that fulfills all these requirements. These include event conjunction, disjunction and inhibition as well as cardinality of events whose graphical expression can be factored into flow-oriented process modeling and event rule modeling. Formal semantics for the language are provided.
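BEMN itself is a graphical notation, but the boolean core of its composite-event operators (conjunction, disjunction, inhibition) can be sketched as predicates over the set of events observed so far; the combinator names and the example rule below are illustrative, not part of BEMN:

```python
def atom(name):
    """True when the named event has been observed."""
    return lambda seen: name in seen

def conj(*rules):
    """Event conjunction: every sub-event has occurred."""
    return lambda seen: all(r(seen) for r in rules)

def disj(*rules):
    """Event disjunction: at least one sub-event has occurred."""
    return lambda seen: any(r(seen) for r in rules)

def inhibit(rule, blocker):
    """Inhibition: the rule holds unless the blocking event occurred."""
    return lambda seen: rule(seen) and not blocker(seen)

# Composite rule: "order and payment received, unless cancelled".
ship = inhibit(conj(atom("order"), atom("payment")), atom("cancel"))
```

Cardinality constraints would additionally require counting occurrences rather than testing membership.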

Relevance:

80.00%

Publisher:

Abstract:

Language use has proven to be the most complex and complicating of all Internet features, yet people and institutions invest enormously in language and cross-language features because they are fundamental to the success of the Internet's past, present and future. The thesis focuses on the developments of the latter – features that facilitate and signify linking between or across languages – in both their historical and current contexts. In the theoretical analysis, the conceptual platform of inter-language linking is developed both to accommodate efforts towards a new social complexity model for the co-evolution of languages and language content, and to create an open analytical space for language and cross-language related features of the Internet and beyond. The practiced uses of inter-language linking have changed over the last decades. Before and during the first years of the WWW, mechanisms of inter-language linking were at best important elements used to create new institutional or content arrangements, but on a large scale they were insignificant. This changed with the emergence of the WWW and its development into a web in which content in different languages co-evolves. The thesis traces the inter-language linking mechanisms that facilitated these dynamic changes by analysing what these linking mechanisms are, how their historical as well as current contexts can be understood, and what kinds of cultural-economic innovation they enable and impede. The study discusses this alongside four empirical cases of bilingual or multilingual media use, ranging from television and web services for languages of smaller populations to large-scale web ventures involving multiple languages by the British Broadcasting Corporation, the Special Broadcasting Service Australia, Wikipedia and Google.
To sum up, the thesis introduces the concepts of 'inter-language linking' and the 'lateral web' to model the social complexity and co-evolution of languages online. The resulting model reconsiders existing social complexity models in that it is the first that can explain the emergence of large-scale, networked co-evolution of languages and language content facilitated by the Internet and the WWW. Finally, the thesis argues that the Internet enables an open space for language and cross-language related features and investigates how far this process is facilitated by (1) amateurs and (2) human-algorithmic interaction cultures.

Relevance:

80.00%

Publisher:

Abstract:

This thesis provides a query model suitable for context-sensitive access to a wide range of distributed linked datasets which are available to scientists using the Internet. The model is designed based on scientific research standards which require scientists to provide replicable methods in their publications. Although there are query models available that provide limited replicability, they do not contextualise the process whereby different scientists select dataset locations based on their trust and physical location. In different contexts, scientists need to perform different data cleaning actions, independent of the overall query, and the model was designed to accommodate this function. The query model was implemented as a prototype web application and its features were verified through its use as the engine behind a major scientific data access site, Bio2RDF.org. The prototype showed that it was possible to have context-sensitive behaviour for each of the three mirrors of Bio2RDF.org using a single set of configuration settings. The prototype provided executable query provenance that could be attached to scientific publications to fulfil replicability requirements. The model was designed to make it simple to independently interpret and execute the query provenance documents using context-specific profiles, without modifying the original provenance documents. Experiments using the prototype as the data access tool in workflow management systems confirmed that the design of the model made it possible to replicate results in different contexts with minimal additions, and no deletions, to query provenance documents.

Relevance:

80.00%

Publisher:

Abstract:

This chapter deals with technical aspects of how USDL service descriptions can be read from and written to different representations for use by humans and tools. A combination of techniques for representing and exchanging USDL has been drawn from Model-Driven Engineering and Semantic Web technologies. The USDL language's structural definition is specified as a MOF meta-model, but some modules were originally defined using the OWL language from the Semantic Web community and translated to the meta-model format. We begin with the important topic of serializing USDL descriptions into XML, so that they can be exchanged between editors, repositories, and other tools. The following topic is how USDL can be made available through the Semantic Web as a network of linked data, connected via URIs. Finally, consideration is given to human-readable representations of USDL descriptions, and how they can be generated, in large part, from the contents of a stored USDL model.

Relevance:

80.00%

Publisher:

Abstract:

Queensland University of Technology (QUT) was one of the first universities in Australia to establish an institutional repository. Launched in November 2003, the repository (QUT ePrints) uses the EPrints open source repository software (from Southampton) and has enjoyed the benefit of an institutional deposit mandate since January 2004. Currently (April 2012), the repository holds over 36,000 records, including 17,909 open access publications, with another 2,434 publications embargoed but with mediated access enabled via the 'Request a copy' button, which is a feature of the EPrints software. At QUT, the repository (http://eprints.qut.edu.au) is managed by the Library. The repository is embedded into a number of other systems at QUT, including the staff profile system and the University's research information system. It has also been integrated into a number of critical processes related to Government reporting and research assessment. Internally, senior research administrators often look to the repository for information to assist with decision-making and planning. While some statistics could be drawn from the advanced search feature and the existing download statistics feature, they were rarely at the level of granularity or aggregation required. Getting the information from the 'back end' of the repository was very time-consuming for the Library staff. In 2011, the Library funded a project to enhance the range of statistics available from the public interface of QUT ePrints. The repository team conducted a series of focus groups and individual interviews to identify and prioritise functionality requirements for a new statistics 'dashboard'. The participants included a mix of research administrators, early career researchers and senior researchers. The repository team identified a number of business criteria (e.g. extensibility, support available, skills required) and then gave each a weighting.
After considering all the known options available, five software packages (IRStats, ePrintsStats, AWStats, BIRT and Google Urchin/Analytics) were thoroughly evaluated against a list of 69 criteria to determine which would be most suitable. The evaluation revealed that IRStats was the best fit for our requirements, deemed capable of meeting 21 out of the 31 high priority criteria. Consequently, IRStats was implemented as the basis for QUT ePrints' new statistics dashboards, which were launched in Open Access Week, October 2011. Statistics dashboards are now available at four levels: whole-of-repository, organisational unit, individual author and individual item. The data available includes cumulative total deposits, time series deposits, deposits by item type, % full texts, % open access, cumulative downloads, time series downloads, downloads by item type, author ranking, paper ranking (by downloads), downloader geographic location, domains, internal vs external downloads, citation data (from Scopus and Web of Science), most popular search terms, and non-search referring websites. The data is displayed in chart, map and table formats. The new statistics dashboards are a great success. Feedback received from staff and students has been very positive. Individual researchers have said that they have found the information very useful when compiling a track record. It is now very easy for senior administrators (including the Deputy Vice-Chancellor, Research) to compare the full-text deposit rates (i.e. mandate compliance rates) across organisational units. This has led to increased 'encouragement' from Heads of School and Deans in relation to the provision of full-text versions.

Relevance:

80.00%

Publisher:

Abstract:

The intersection of the Social Web and the Semantic Web has put folksonomy in the spotlight for its potential in overcoming the knowledge acquisition bottleneck and providing insight into the "wisdom of the crowds". Folksonomy, which emerges from collaborative tagging activities, provides insight into users' understanding of Web resources, which can be useful for searching and organizing purposes. However, collaborative tagging vocabulary poses some challenges, since tags are freely chosen by users and may exhibit synonymy and polysemy problems. To overcome these challenges and boost the potential of folksonomy as emergent semantics, we propose to consolidate the diverse vocabulary into unified entities and concepts. We propose to extract a tag ontology through an ontology learning process to represent the semantics of a tagging community. This paper presents a novel approach to learning the ontology based on the widely used lexical database WordNet. We present personalization strategies to disambiguate the semantics of tags by combining the opinion of WordNet lexicographers with users' tagging behavior. We provide empirical evaluations by using the semantic information contained in the ontology in a tag recommendation experiment. The results show that by using the semantic relationships in the ontology, the accuracy of the tag recommender is improved.
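A toy sketch of the disambiguation step: the miniature sense inventory below stands in for WordNet, and candidate senses are scored by overlap with the tags the user applied alongside the ambiguous one (a crude proxy for combining lexicographer opinion with tagging behavior). All names and senses here are made up for illustration:

```python
# Hypothetical miniature "WordNet": each tag maps to candidate senses,
# each carrying related terms as its lexicographer context.
SENSES = {
    "apple": [
        {"sense": "apple.fruit",   "context": {"fruit", "tree", "food"}},
        {"sense": "apple.company", "context": {"computer", "iphone", "mac"}},
    ],
}

def disambiguate(tag, cooccurring_tags):
    """Pick the sense whose context overlaps most with the co-occurring
    tags; unknown tags are returned unchanged."""
    candidates = SENSES.get(tag)
    if not candidates:
        return tag
    best = max(candidates,
               key=lambda s: len(s["context"] & set(cooccurring_tags)))
    return best["sense"]
```

The consolidated senses, rather than raw tag strings, would then feed the tag recommender.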

Relevance:

80.00%

Publisher:

Abstract:

We introduce a lightweight biometric solution for user authentication over networks using online handwritten signatures. The algorithm proposed is based on a modified Hausdorff distance and has favorable characteristics such as low computational cost and minimal training requirements. Furthermore, we investigate an information theoretic model for capacity and performance analysis for biometric authentication which brings additional theoretical insights to the problem. A fully functional proof-of-concept prototype that relies on commonly available off-the-shelf hardware is developed as a client-server system that supports Web services. Initial experimental results show that the algorithm performs well despite its low computational requirements and is resilient against over-the-shoulder attacks.
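The abstract does not spell out the modification used; one widely used variant, the modified Hausdorff distance of Dubuisson and Jain, compares two point sequences (such as sampled signature trajectories) as follows:

```python
import math

def _avg_min_dist(a, b):
    """Average distance from each point in a to its nearest point in b."""
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

def modified_hausdorff(a, b):
    """Modified Hausdorff distance (Dubuisson & Jain): the larger of the
    two directed average nearest-neighbour distances."""
    return max(_avg_min_dist(a, b), _avg_min_dist(b, a))
```

A verifier would accept a claimed signature when its distance to the enrolled template falls below a tuned threshold; the averaging makes the measure less sensitive to single outlier points than the classical Hausdorff distance.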

Relevance:

80.00%

Publisher:

Abstract:

This paper presents the findings from the first phase of a larger study into the information literacy of website designers. Using a phenomenographic approach, it maps the variation in experiencing the phenomenon of information literacy from the viewpoint of website designers. The current results reveal important insights into the lived experience of this group of professionals. Analysis of the data has identified five different ways in which website designers experience information literacy: problem-solving, using best practices, using a knowledge base, building a successful website, and being part of a learning community of practice. As there is presently relatively little research in the area of workplace information literacy, this study provides important additional insights into our understanding of information literacy in the workplace, especially in the specific context of website design. Such understandings are of value to library and information professionals working with web professionals either within or beyond libraries. These understandings may also enable information professionals to take a more proactive role in the website-design industry. Finally, the knowledge obtained will contribute to the education of both website-design and library and information science (LIS) students.