137 results for SIB Semantic Information Broker OSGI Semantic Web


Relevance:

100.00%

Publisher:

Abstract:

This thesis provides a query model suitable for context sensitive access to a wide range of distributed linked datasets which are available to scientists using the Internet. The model is designed based on scientific research standards which require scientists to provide replicable methods in their publications. Although there are query models available that provide limited replicability, they do not contextualise the process whereby different scientists select dataset locations based on their trust and physical location. In different contexts, scientists need to perform different data cleaning actions, independent of the overall query, and the model was designed to accommodate this function. The query model was implemented as a prototype web application and its features were verified through its use as the engine behind a major scientific data access site, Bio2RDF.org. The prototype showed that it was possible to have context sensitive behaviour for each of the three mirrors of Bio2RDF.org using a single set of configuration settings. The prototype provided executable query provenance that could be attached to scientific publications to fulfil replicability requirements. The model was designed to make it simple to independently interpret and execute the query provenance documents using context specific profiles, without modifying the original provenance documents. Experiments using the prototype as the data access tool in workflow management systems confirmed that the design of the model made it possible to replicate results in different contexts with minimal additions, and no deletions, to query provenance documents.
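The context-sensitive resolution described above can be sketched in a few lines. This is a minimal illustration, not the thesis's actual format: the document structure, endpoint URLs and function names are our own assumptions. The key idea it shows is that the provenance document lists alternative dataset locations and is never modified; a context-specific profile only selects among them.

```python
# Hypothetical sketch: a query provenance document lists alternative
# dataset locations; a context profile picks among them without
# modifying the original document, so it stays replicable.

provenance_doc = {
    "query": "SELECT ?p ?o WHERE { <gene:BRCA1> ?p ?o }",
    "dataset": "geneid",
    "endpoints": [
        "http://bio2rdf.org/sparql",
        "http://mirror1.example.org/sparql",
        "http://mirror2.example.org/sparql",
    ],
}

def resolve_endpoint(doc, profile):
    """Return the first endpoint the profile trusts; fall back to the
    canonical location if the profile trusts none of them."""
    for endpoint in doc["endpoints"]:
        if endpoint in profile["trusted"]:
            return endpoint
    return doc["endpoints"][0]

# Each Bio2RDF mirror would carry its own profile over the same documents.
mirror_profile = {"trusted": {"http://mirror2.example.org/sparql"}}
print(resolve_endpoint(provenance_doc, mirror_profile))
```

Because the profile is applied at execution time, another scientist can re-run the same provenance document under a different profile and obtain context-appropriate behaviour with no deletions from the document.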

Relevance:

100.00%

Publisher:

Abstract:

As boundaries between physical and online learning spaces become increasingly blurred in higher education, how can students gain full benefit of Web 2.0 social media and mobile technologies for learning? How can we, as information professionals and educators, best support the information literacy learning needs of students who are universally mobile and Google-focused? This chapter presents informed learning (Bruce, 2008) as a pedagogical construct with potential to support learning across the higher education curriculum, for Web 2.0 and beyond. After outlining the principles of informed learning and how they may enrich the higher education curriculum, we explain the role of library and information professionals in promoting informed learning for Web 2.0 and beyond. Then, by way of illustration, we describe recent experience at an American university where librarians simultaneously learned about and applied informed learning principles in reshaping the information literacy program.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the findings from the first phase of a larger study into the information literacy of website designers. Using a phenomenographic approach, it maps the variation in experiencing the phenomenon of information literacy from the viewpoint of website designers. The current results reveal important insights into the lived experience of this group of professionals. Analysis of the data has identified five different ways in which website designers experience information literacy: problem-solving, using best practices, using a knowledge base, building a successful website, and being part of a learning community of practice. As there is presently relatively little research in the area of workplace information literacy, this study provides important additional insights into our understanding of information literacy in the workplace, especially in the specific context of website design. Such understandings are of value to library and information professionals working with web professionals either within or beyond libraries. These understandings may also enable information professionals to take a more proactive role in the website design industry. Finally, the knowledge obtained will contribute to the education of both website design and library and information science (LIS) students.

Relevance:

100.00%

Publisher:

Abstract:

Social media tools are starting to become mainstream, and those working in the software development industry are often ahead of the game in using current technological innovations to improve their work. With the advent of outsourcing and distributed teams, the software industry is ideally placed to take advantage of social media technologies, tools and environments. This paper looks at how social media is being used by early adopters within the software development industry. Current tools and trends in social media tool use are described and critiqued: what works and what doesn't. We use industrial case studies from platform development, commercial application development and government contexts which provide a clear picture of the emergent state of the art. These real-world experiences are then used to show how working collaboratively in geographically dispersed teams, enabled by social media, can enhance and improve the development experience.

Relevance:

100.00%

Publisher:

Abstract:

Currently we face rapid growth in the number of reliable information sources on the Internet. The quantity of information available to everyone via the Internet is growing dramatically each year [15]. At the same time, the temporal and cognitive resources of human users are not changing, causing the phenomenon of information overload. The World Wide Web is one of the main sources of information for decision makers (reference to my research). However, our studies show that, at least in Poland, decision makers see some important problems when turning to the Internet as a source of decision information. One of the most commonly raised obstacles is the distribution of relevant information among many sources, and therefore the need to visit different Web sources in order to collect all important content and analyze it. A few research groups have recently turned to the problem of information extraction from the Web [13]. Most effort so far has been directed toward collecting data from dispersed databases accessible via web pages (referred to as data extraction, or information extraction from the Web) and toward understanding natural language texts by means of fact, entity, and association recognition (referred to as information extraction). Data extraction efforts show some interesting results; however, proper integration of web databases is still beyond us. The information extraction field has recently been very successful in retrieving information from natural language texts, but it still lacks the ability to understand more complex information, which requires common sense knowledge, discourse analysis and disambiguation techniques.

Relevance:

100.00%

Publisher:

Abstract:

The present paper suggests articulating the general context of the workplace in information literacy research. The paper proposes distinguishing between information literacy research in workplaces and in professions. Referring to the results of a phenomenographic enquiry into web professionals' information literacy as an example, it shows that, in particular contexts and depending on the nature of the context, work-related information literacy is experienced beyond physical workspaces and at the professional level. This involves people interacting with each other and with information at a broader level than within a physically bounded workspace. In the example case discussed in the paper, virtuality is identified as the dominant feature of the profession that causes information literacy to be experienced at the professional level. It is anticipated that pursuing the direction proposed in the paper will result in a more segmented image of work-related information literacy.

Relevance:

100.00%

Publisher:

Abstract:

In the current economy, knowledge has been recognised as a valuable organisational asset, a crucial factor that helps organisations succeed in highly competitive environments. Many organisations have begun projects and special initiatives aimed at fostering better knowledge sharing amongst their employees. Not surprisingly, information technology (IT) has been a central element of many of these projects and initiatives, as the potential of emerging information technologies such as Web 2.0 for enabling the management of organisational knowledge is recognised. This technology could be used as a collaborative system for knowledge management (KM) within enterprises. Enterprise 2.0 is the application of Web 2.0 in an organisational context. Enterprise 2.0 technologies are web-based social software that facilitate collaboration, communication and information flow in a bidirectional manner: an essential aspect of organisational knowledge management. This chapter explains how Enterprise 2.0 technologies (Web 2.0 technologies within organisations) can support knowledge management. The chapter also explores how such technologies support the codifying (technology-centred) and social network (people-centred) approaches of KM, towards bridging the current gap between these two approaches.

Relevance:

100.00%

Publisher:

Abstract:

This article combines information from fathers' rights Web sites with demographic, historical, and other information to provide an empirically based analysis of fathers' rights advocacy in the United States. Content analysis discerns three factors that are central to the groups' rhetoric: representing domestic violence allegations as false, promoting presumptive joint custody and decreasing child support, and portraying women as perpetrators of domestic abuse. Fathers' rights organizations and themes are examined in relation to state-level demographics and custody policy. The implications of fathers' rights activism for battered women and their children are explored.

Relevance:

100.00%

Publisher:

Abstract:

The co-creation of cultural artefacts has been democratised by the recent technological affordances of information and communication technologies. Web 2.0 technologies have enabled greater possibilities of citizen inclusion within the media conversations of their nations. For example, the Australian audience has more opportunities to collaboratively produce and tell their story to a broader audience via the public service media (PSM) facilitated platforms of the Australian Broadcasting Corporation (ABC). However, providing open collaborative production for the audience gives rise to a problem: how might the PSM manage the interests of all the stakeholders and align those interests with its legislated Charter? This paper considers this problem through the ABC's user-created content participatory platform, ABC Pool, and highlights the cultural intermediary as the role responsible for managing these tensions. This paper also suggests cultural intermediation is a useful framework for other media organisations engaging in co-creative activities with their audiences.

Relevance:

100.00%

Publisher:

Abstract:

Textual document sets have become an important and rapidly growing information source on the web. Text classification is one of the crucial technologies for information organisation and management, and it has attracted wide attention from researchers in different fields. This paper first introduces feature selection methods, implementation algorithms and applications of text classification. However, because there is much noise in the knowledge extracted by current data-mining techniques for text classification, much uncertainty arises in the classification process, produced by both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve the performance of text classification. Further improving the process of knowledge extraction and the effective utilisation of the extracted knowledge is a critical and challenging step. A Rough Set decision-making approach is proposed, using Rough Set decision techniques to classify more precisely those textual documents which are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and the decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric, named CEI, which is effective for performance assessment in similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and related fields.
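The Rough Set idea behind the approach above can be illustrated with a toy example (the documents, features and labels below are our own, not the paper's). Documents are grouped into indiscernibility classes by their feature vectors; a target class is then bounded by a lower approximation (certainly relevant) and an upper approximation (possibly relevant), and the boundary region between them contains exactly the documents a classic classifier cannot separate.

```python
# Toy sketch of Rough Set approximations for text classification.
from collections import defaultdict

docs = {
    "d1": ("sport", "ball"),
    "d2": ("sport", "ball"),
    "d3": ("sport", "news"),
    "d4": ("money", "news"),
}
relevant = {"d1", "d3"}  # documents labelled as the target class

# Indiscernibility classes: documents with identical feature vectors
groups = defaultdict(set)
for doc, features in docs.items():
    groups[features].add(doc)

lower, upper = set(), set()
for group in groups.values():
    if group <= relevant:   # class entirely inside the target: certain
        lower |= group
    if group & relevant:    # class overlapping the target: possible
        upper |= group

boundary = upper - lower    # the uncertain documents
print(lower, upper, boundary)
```

Here `d1` and `d2` share identical features but different labels, so they land in the boundary region; those are the documents for which the decision-making machinery is needed.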

Relevance:

70.00%

Publisher:

Abstract:

With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Owing to the availability of a large number of Web services, finding an appropriate Web service matching the requirements of the user is a challenge. This warrants an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery in matching the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used in describing the services, as well as the input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should then fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web service description language document, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of query terms that could not otherwise be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user; in such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase.
Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost of traversal. The third phase, system integration, integrates the results of the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
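The link-analysis phase described above can be sketched with a standard all-pairs shortest-path pass (Floyd-Warshall). The service names and linking costs below are illustrative assumptions, not data from the thesis; the point is only the shape of the computation: services as nodes, linking costs as edge weights, and the cheapest composition read off the distance matrix.

```python
# Sketch of phase II link analysis: Web services as graph nodes, edge
# weights as hypothetical linking costs, Floyd-Warshall for all pairs.
INF = float("inf")
services = ["geocode", "weather", "alerts"]
cost = {
    ("geocode", "weather"): 1.0,  # weather can consume geocode's output
    ("weather", "alerts"): 2.0,
    ("geocode", "alerts"): 5.0,   # a direct but more expensive link
}

n = len(services)
idx = {s: i for i, s in enumerate(services)}
dist = [[INF] * n for _ in range(n)]
for i in range(n):
    dist[i][i] = 0.0
for (a, b), w in cost.items():
    dist[idx[a]][idx[b]] = w

# Relax every pair of services through every intermediate service
for k in range(n):
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

print(dist[idx["geocode"]][idx["alerts"]])  # cheapest composition cost
```

In this toy graph the two-service composition geocode → weather → alerts (cost 3.0) beats the direct geocode → alerts link (cost 5.0), which is the kind of result the recommendation engine would then fuse with the phase-I semantic scores.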

Relevance:

70.00%

Publisher:

Abstract:

For more than a decade, research in the field of context-aware computing has aimed to find ways to exploit situational information that can be detected by mobile computing and sensor technologies. The goal is to provide people with new and improved applications, enhanced functionality and a better user experience (Dey, 2001). Early applications focused on representing or computing on physical parameters, such as showing your location and the location of people or things around you. Such applications might show where the next bus is, which of your friends are in the vicinity, and so on. With the advent of social networking software, microblogging sites such as Facebook and Twitter, recommender systems and the like, context-aware computing is moving towards mining the social web in order to provide better representations and understanding of context, including social context. In this paper we begin by recapping different theoretical framings of context. We then discuss the problem of context-aware computing from a design perspective.

Relevance:

60.00%

Publisher:

Abstract:

Information overload has become a serious issue for web users, and personalisation can provide effective solutions to this problem. Recommender systems are one popular personalisation tool to help users deal with the issue. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affects the performance of recommender systems and other personalisation systems. In Web 2.0, emerging user information provides new possible approaches to profiling users. Folksonomy, or tag information, is a typical kind of Web 2.0 information. Folksonomy implies users' topic interests and opinion information, and it has become another important source of user information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a lot of noise such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solving the tag quality problem and profiling users accurately. Harvesting the wisdom of crowds and experts, three new user profiling approaches are proposed: a folksonomy-based user profiling approach, a taxonomy-based user profiling approach, and a hybrid user profiling approach based on both folksonomy and taxonomy. The proposed user profiling approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user- and item-based collaborative filtering approaches, combined with content filtering methods, are proposed to make recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness evaluation experiments were conducted on two real-world datasets collected from the Amazon.com and CiteULike websites.
The experimental results demonstrate that the proposed user profiling and recommendation approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability evaluation experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to effectively using the wisdom of crowds and experts to help users solve information overload issues by providing more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to better use of taxonomy information given by experts and folksonomy information contributed by users in Web 2.0.
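The core of tag-based profiling can be illustrated in miniature (the tags, items and weighting below are our own toy assumptions, not the thesis's datasets or algorithms). A user profile is represented as a weighted tag vector built from tagging history, and candidate items are ranked by cosine similarity between the profile vector and each item's tag vector.

```python
# Toy sketch of folksonomy-based user profiling and content filtering.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse tag-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Profile: tag frequencies from the user's tagging history
user_tags = Counter(["python", "python", "web", "semantic"])

# Items described by the tags the crowd has assigned to them
items = {
    "book_a": Counter(["python", "web"]),
    "book_b": Counter(["cooking", "travel"]),
}

ranked = sorted(items, key=lambda i: cosine(user_tags, items[i]),
                reverse=True)
print(ranked[0])  # the item whose tag vector best matches the profile
```

This is the purely folksonomy-based variant; the taxonomy-based and hybrid profiles in the thesis additionally map noisy free-form tags onto expert-curated categories before the vectors are compared, which is what mitigates synonyms and personal tags.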