758 results for cloud computing.


Relevance: 60.00%

Abstract:

Information overload has become a serious issue for web users. Personalisation can provide effective solutions to overcome this problem, and recommender systems are one popular personalisation tool for helping users deal with it. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affects the performance of recommender systems and other personalisation systems. In Web 2.0, emerging user information provides new possible solutions for profiling users. Folksonomy, or tag information, is a typical kind of Web 2.0 information: it implies users' topic interests and opinions, and has become another important source of user information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a great deal of noise, such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solving the tag quality problem and profiling users accurately. Harvesting the wisdom of both crowds and experts, it proposes three new user profiling approaches: a folksonomy-based approach, a taxonomy-based approach, and a hybrid approach based on both folksonomy and taxonomy. The proposed approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user-based and item-based collaborative filtering approaches, combined with content filtering methods, are proposed for making recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness experiments were conducted on two real-world datasets collected from the Amazon.com and CiteULike websites; the results demonstrate that the proposed approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to the effective use of the wisdom of crowds and experts to help users overcome information overload by providing more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to better use of the taxonomy information given by experts and the folksonomy information contributed by users in Web 2.0.
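The abstract names user-based collaborative filtering over tag-based profiles without implementation detail. As a minimal illustration of that idea only, the sketch below builds tag-frequency profiles, finds a user's nearest neighbours by cosine similarity, and recommends their unseen items; all data, names and parameters are invented for illustration, not taken from the thesis.

```python
# Minimal sketch of user-based collaborative filtering over tag-based
# user profiles. All data and function names are hypothetical.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse tag-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(target: str, profiles: dict, bookmarks: dict, k: int = 2):
    """Score items bookmarked by the k users most similar to the target."""
    neighbours = sorted(
        (u for u in profiles if u != target),
        key=lambda u: cosine(profiles[target], profiles[u]),
        reverse=True,
    )[:k]
    seen = bookmarks[target]
    scores = Counter()
    for u in neighbours:
        for item in bookmarks[u] - seen:       # only unseen items
            scores[item] += cosine(profiles[target], profiles[u])
    return scores.most_common()

# Tag-based profiles (tag -> frequency) and each user's bookmarked items.
profiles = {
    "alice": Counter({"python": 3, "ml": 2}),
    "bob": Counter({"python": 2, "ml": 1, "cloud": 1}),
    "carol": Counter({"cooking": 4}),
}
bookmarks = {"alice": {"i1"}, "bob": {"i1", "i2"}, "carol": {"i3"}}
print(recommend("alice", profiles, bookmarks))  # bob's "i2" ranks first
```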

Relevance: 60.00%

Abstract:

Large-scale user-created information emerging in Web 2.0, such as tags, reviews, comments and blogs, can be used to profile users' interests and preferences and to make personalized recommendations. To solve the scalability problem of current user profiling and recommender systems, this paper proposes a parallel user profiling approach and a scalable recommender system. Advanced cloud computing techniques, including Hadoop, MapReduce and Cascading, are employed to implement the proposed approaches. The experiments were conducted on Amazon EC2 Elastic MapReduce and S3 with a real-world, large-scale dataset from the Del.icio.us website.
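As a rough illustration of the MapReduce pattern the paper builds on, the following single-machine sketch maps (user, tag) records to keyed counts and reduces them into per-user profiles. Python's multiprocessing merely stands in for a Hadoop cluster here, and all record data is invented.

```python
# Single-machine sketch of the MapReduce pattern behind parallel user
# profiling: map each (user, tag) record to a keyed count, then reduce
# by key and regroup into per-user tag profiles.
from collections import defaultdict
from multiprocessing import Pool

def map_record(record):
    user, tag = record
    return ((user, tag), 1)

def run(records, workers: int = 4):
    with Pool(workers) as pool:
        mapped = pool.map(map_record, records)  # map phase, in parallel
    counts = defaultdict(int)                   # shuffle + reduce phase
    for key, value in mapped:
        counts[key] += value
    profiles = defaultdict(dict)                # regroup by user
    for (user, tag), n in counts.items():
        profiles[user][tag] = n
    return dict(profiles)

if __name__ == "__main__":
    records = [("u1", "cloud"), ("u1", "cloud"), ("u1", "hadoop"), ("u2", "python")]
    print(run(records))  # {'u1': {'cloud': 2, 'hadoop': 1}, 'u2': {'python': 1}}
```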

Relevance: 60.00%

Abstract:

Recent changes in IT organisations have resulted in changes to library IT support. Concurrently, new tools and systems for service delivery have become available, but these require a move away from the traditional ICT model. Many libraries are investigating new models, including Software as a Service (SaaS), cloud computing and open source software. This paper considers whether the adoption of these tools and environments by libraries has occurred as a result of a lack of suitable ICT solutions and support from ICT organisations. It also considers what skills library staff need in order to ensure sustainability, supportability and, ultimately, success.

Relevance: 60.00%

Abstract:

EMR (Electronic Medical Record) is an emerging technology that blends the non-IT and IT areas, and one way to link the two is to construct databases. Nowadays, an EMR supports patient care before and after treatment and must satisfy all stakeholders, such as practitioners, nurses, researchers, administrators and financial departments. For database maintenance, the DAS (Data as a Service) model is one outsourcing solution; however, scalability and strategy issues must be considered when planning to use the DAS model properly. We constructed three kinds of databases: plain-text, MS built-in encryption (an in-house model), and a custom AES (Advanced Encryption Standard) DAS model, scaling from 5K to 2560K records. To improve the performance of the custom AES-DAS model, we also devised a Bucket Index using a Bloom Filter. The simulation showed that response times increased arithmetically at first but, beyond a certain threshold, increased exponentially. In conclusion, if the database model is close to an in-house model, vendor technology is a good way to obtain consistent query response times. If the model is a DAS model, outsourcing the database is easy, and techniques such as the Bucket Index enhance its utilisation. Database design, such as consideration of field types, is also important for faster query response times. This study suggests that cloud computing could be the next DAS model to address both scalability and security issues.
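The Bucket Index with Bloom Filter is described only at a high level in the abstract. The sketch below shows the general idea under stated assumptions: each bucket of encrypted records carries a Bloom filter over its searchable keys, so a query can skip buckets that certainly do not match before anything is fetched and decrypted. Filter sizes, hash choices and the key format are illustrative assumptions, not the paper's design.

```python
# Sketch of a Bloom-filter bucket index in the spirit of the AES-DAS
# scheme: the server stores encrypted buckets, and a per-bucket Bloom
# filter lets the client skip buckets that cannot contain the key.
import hashlib

class BloomFilter:
    def __init__(self, size: int = 256, hashes: int = 3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, key: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key: str) -> bool:
        # False positives possible, false negatives never.
        return all(self.bits & (1 << p) for p in self._positions(key))

# One filter per bucket of (notionally encrypted) records.
buckets = {"b0": ["rec1", "rec2"], "b1": ["rec3"]}
index = {bid: BloomFilter() for bid in buckets}
index["b0"].add("patient:42")
index["b1"].add("patient:99")

query = "patient:42"
candidates = [bid for bid, bf in index.items() if bf.might_contain(query)]
print(candidates)  # only these buckets need fetching and decryption
```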

Relevance: 60.00%

Abstract:

Electronic services are a leitmotif in 'hot' topics like Software as a Service, Service Oriented Architecture (SOA), service-oriented computing, cloud computing, application markets and smart devices. We propose to consider these in what has been termed the Service Ecosystem (SES). The SES encompasses all levels of electronic services and their interaction, with human consumption and initiation on its periphery, in much the same way the 'Web' describes a plethora of technologies that ultimately connect information and expose it to humans. Presently, the SES is heterogeneous, fragmented and confined to semi-closed systems. A key issue hampering the emergence of an integrated SES is Service Discovery (SD). An SES will be dynamic, with areas of structured and unstructured information within which service providers and 'lay' human consumers interact; until now the two have been disjointed, e.g., SOA-enabled organisations, industries and domains are choreographed by domain experts or 'hard-wired' to smart device application markets and web applications. In an SES, services are accessible, comparable and exchangeable for human consumers, closing the gap to the providers. This requires a new form of SD with which humans can discover services transparently and effectively, without special knowledge or training. We propose two modes of discovery: directed search, which follows an agenda, and explorative search, which speculatively expands knowledge of an area of interest by means of categories. Inspired by conceptual space theory from cognitive science, we propose to implement both modes using concepts that map a lay consumer's service need to terminologically sophisticated descriptions of services. To this end, we reframe SD as an information retrieval task over the information attached to services, such as descriptions, reviews, documentation and websites: the Service Information Shadow. The Semantic Space model transforms the shadow's unstructured semantic information into a geometric, concept-like representation. We introduce an improved and extended Semantic Space that includes categorisation, calling it the Semantic Service Discovery model. We evaluate our model with a highly relevant, service-related corpus simulating a Service Information Shadow, including manually constructed complex service agendas as well as manual groupings of services, and compare it against state-of-the-art information retrieval systems and clustering algorithms. By means of an extensive series of empirical evaluations, we establish optimal parameter settings for the semantic space model. The evaluations demonstrate the model's effectiveness for SD in terms of retrieval precision over state-of-the-art information retrieval models (directed search) and in the meaningful, automatic categorisation of service-related information, which shows potential to form the basis of a useful, cognitively motivated map of the SES for exploratory search.
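As a loose illustration of the directed-search mode, the sketch below embeds service descriptions and a lay query into a simple term co-occurrence space and ranks services by cosine similarity. This is a crude stand-in for the Semantic Space model, not its actual construction; the corpus and tokenisation are invented.

```python
# Crude stand-in for directed search over a "Service Information Shadow":
# terms get co-occurrence vectors, texts are embedded as sums of term
# vectors, and services are ranked by cosine similarity to the query.
from collections import Counter, defaultdict
from math import sqrt

services = {
    "s1": "store files in the cloud with automatic backup",
    "s2": "send invoices and track payments online",
    "s3": "backup photos to remote cloud storage",
}

# Term -> co-occurrence counts over other terms in the same description.
space = defaultdict(Counter)
for text in services.values():
    words = text.split()
    for w in words:
        for other in words:
            if other != w:
                space[w][other] += 1

def embed(text: str) -> Counter:
    vec = Counter()
    for w in text.split():
        vec.update(space.get(w, Counter()))
    return vec

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = embed("keep a safe cloud copy of my photos")
ranked = sorted(services, key=lambda s: cosine(query, embed(services[s])), reverse=True)
print(ranked)  # cloud/backup services rank above the invoicing service
```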

Relevance: 60.00%

Abstract:

Through the rise of cloud computing, on-demand applications and business networks, services are increasingly being exposed and delivered on the Internet and through mobile communications. So far, services have mainly been described through technical interface descriptions. The description of business details, such as pricing, service levels or licensing, has been neglected and is therefore hard for service consumers to process automatically. Third-party intermediaries, such as brokers, cloud providers or channel partners, are also interested in these business details in order to extend services and their delivery and thus further monetize them. In this paper, the constructivist design of the Unified Service Description Language (USDL), aimed at describing services across the human-to-automation continuum, is presented. The proposal of USDL follows well-defined requirements, which are expressed against a common service discourse and synthesized from currently available service description efforts. USDL's concepts and modules are evaluated for their support of the different requirements and use cases.

Relevance: 60.00%

Abstract:

Submission to the Australian Government Attorney-General's Department consultation paper on Revising the Scope of the Copyright 'Safe Harbour Scheme'.

Relevance: 60.00%

Abstract:

This study analyses organisational knowledge integration processes from a multi-level and systemic perspective, with particular reference to the case of Fujitsu. A conceptual framework for knowledge integration is suggested, focusing on team-building capability, the capture and use of individual tacit knowledge, and communication networks for integrating the dispersed specialist knowledge required in the development of new products and services. The research highlights that the knowledge integration occurring in the innovation process is a result of knowledge exposure, its distribution and embodiment, and finally its transfer, which leads to innovation capability and competitive advantage in the firm.

Relevance: 60.00%

Abstract:

Improving energy efficiency has become increasingly important in data centers in recent years as a way to curb rapidly growing electricity consumption. The power dissipation of the physical servers is the root cause of the power usage of other systems, such as cooling. Many efforts have been made to make data centers more energy efficient; one of them is to minimize the total power consumption of the servers in a data center through virtual machine consolidation, which is implemented by virtual machine (VM) placement. The placement problem is often modeled as a bin packing problem. Due to its NP-hard nature, heuristic solutions such as the First Fit and Best Fit algorithms have often been used and generally give good results, but their performance leaves room for further improvement. In this paper we propose a Simulated Annealing (SA) based algorithm, which aims at further improvement from any feasible placement. This is the first published attempt to use SA to solve the VM placement problem for power optimization. Experimental results show that the SA algorithm can generate better results, saving up to 25 percent more energy than First Fit Decreasing, in an acceptable time frame.
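The paper's exact SA formulation is not given in the abstract. The sketch below shows the general shape of such an algorithm under simplifying assumptions: start from a First Fit placement, repeatedly move one VM to a random host, and accept worsening moves with probability exp(-delta/T) while the temperature cools. The power model (count of active hosts) and all parameters are illustrative only, not the paper's.

```python
# Simulated annealing for VM placement, starting from First Fit and
# minimising the number of active hosts as a proxy for power.
import math
import random

VMS = [0.4, 0.7, 0.6, 0.3, 0.4, 0.7, 0.6, 0.3]   # CPU demand per VM
CAPACITY = 1.0                                    # capacity per host

def first_fit(vms):
    """Initial feasible placement: each VM goes on the first host that fits."""
    loads, placement = [], []
    for demand in vms:
        for i, load in enumerate(loads):
            if load + demand <= CAPACITY:
                loads[i] += demand
                placement.append(i)
                break
        else:
            loads.append(demand)
            placement.append(len(loads) - 1)
    return placement

def active_hosts(placement):
    return len(set(placement))

def feasible(placement):
    loads = {}
    for vm, host in enumerate(placement):
        loads[host] = loads.get(host, 0.0) + VMS[vm]
    return all(load <= CAPACITY + 1e-9 for load in loads.values())

def anneal(placement, t=1.0, cooling=0.995, steps=5000):
    current, best = list(placement), list(placement)
    for _ in range(steps):
        candidate = list(current)
        candidate[random.randrange(len(VMS))] = random.randrange(len(VMS))
        if feasible(candidate):
            delta = active_hosts(candidate) - active_hosts(current)
            # Always accept improvements; accept worsening moves with
            # probability e^(-delta/t), which shrinks as t cools.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if active_hosts(current) < active_hosts(best):
                    best = list(current)
        t *= cooling
    return best

start = first_fit(VMS)
result = anneal(start)
print(active_hosts(start), "->", active_hosts(result))  # typically 5 -> 4
```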

Relevance: 60.00%

Abstract:

Flexible information exchange is critical to successful design-analysis integration, but current top-down, standards-based and model-oriented strategies impose restrictions that contradict this flexibility. In this article we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. We then discuss how a shared mapping process that is flexible and user-friendly supports non-programmers in creating these custom connections. Adopting a services-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We then discuss potential challenges and opportunities for its development as a flexible, visual, collaborative, scalable and open system.
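As a toy illustration of this bottom-up idea, the sketch below chains a pretend design tool, a user-defined mapping step and a pretend analysis tool into a small process network; in the proposed platform each such step would run as a distributed web service rather than a local function. All tool names, field names and coefficients are invented.

```python
# Toy process network: discrete tools exposed as steps, with a
# user-controlled mapping translating one tool's output schema into
# the next tool's input schema.
def design_tool():
    # Pretend export from a design application.
    return {"wall_area_m2": 42.0, "glazing_ratio": 0.35}

def user_mapping(design_output):
    # The shared, user-defined mapping step: rename and derive fields
    # so the analysis tool receives exactly the schema it expects.
    return {
        "area": design_output["wall_area_m2"],
        "window_area": design_output["wall_area_m2"] * design_output["glazing_ratio"],
    }

def analysis_tool(model):
    # Pretend energy analysis consuming the mapped model.
    return {"heat_loss_w": 1.8 * model["area"] + 2.9 * model["window_area"]}

# The network is a composition of steps; non-programmers would author
# the mapping visually rather than in code.
data = design_tool()
for step in (user_mapping, analysis_tool):
    data = step(data)
print(data)
```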

Relevance: 60.00%

Abstract:

Flexible information exchange is critical to successful design integration, but current top-down, standards-based and model-oriented strategies impose restrictions that contradict this flexibility. In this paper we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. Adopting a services-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We discuss potential challenges and opportunities for its development as a flexible, visual, collaborative, scalable and open system.

Relevance: 60.00%

Abstract:

Over the last two decades, the internet and e-commerce have reshaped the way we communicate, interact and transact. In the converged environment enabled by high-speed broadband, Web 2.0, social media, virtual worlds, user-generated content, cloud computing, VoIP, open source software and open content have rapidly become established features of our online experience. Business and government alike increasingly use the internet as the preferred platform for delivering their goods and services and for engaging effectively with their clients. New ways of doing things online, and challenges to existing business, government and social activities, have tested current laws and often demand new policies and laws adapted to the new realities. The focus of this book is the regulation of social, cultural and commercial activity on the World Wide Web. It considers developments in the law that have been, and continue to be, brought about by the emergence of the internet and e-commerce, and it analyses how the law is applied to define rights and obligations in relation to online infrastructure, content and practices.

Relevance: 60.00%

Abstract:

Research on Enterprise Resource Planning (ERP) Systems has become a well-established theme in Information Systems (IS) research. ERP Systems, given their unique differentiation from other IS applications, have provided an interesting backdrop for testing and re-testing some of the key and fundamental concepts in IS. While some researchers have tested well-established concepts of technology acceptance, system usage and system success in the context of ERP Systems, others have researched how new paradigms like cloud computing and social media integrate with them. Moreover, ERP Systems have provided the context for cross-disciplinary research in areas such as knowledge management, project management and business process management. Almost two decades after its inception in IS research, this paper provides a critique of 198 papers published on ERP Systems from 2006 to 2012. We observe patterns in ES research, provide comparisons to past studies and suggest future research directions.

Relevance: 60.00%

Abstract:

Firms are moving away from decentralized regional offices. Last year the author spoke with a valuer working on the Sunshine Coast for a Brisbane firm. In years past this valuer would have left home in the morning to go to the office, as well as travelling to client sites during the day. Now they get up, have breakfast, change out of their pyjamas (if they have meetings!) and walk into the home office set up by their employer to 'punch in'. Apart from travel for essential meetings at head office or for on-site inspections, they can attend work and engage with colleagues and clients without ever leaving home. While this practice may save the firm costs and offer a commuter-friendly way of working, it raises a range of issues to be managed.

Relevance: 60.00%

Abstract:

Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results than the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results, however, often require orders of magnitude more calculation time to attain high precision, reducing their utility within the clinical environment. This work aims to improve the clinical utility of Monte Carlo simulation by developing techniques that enable faster simulation of radiotherapy geometries, principally through the use of new high-performance computing environments and simpler, alternative yet equivalent representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n as the number n of cloud-based computers performing the calculation in parallel increases. Unlike traditional supercomputer infrastructure, however, there is no initial cost outlay, only modest ongoing usage fees; the simulations described in the following were performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed. At the simulation implementation level, a new computer-aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalents in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry. Further, a new technique for navigating tessellated or meshed geometries is described, allowing up to three orders of magnitude performance improvement through the use of tetrahedral meshes in place of complex triangular surface meshes. The technique applies to the definition of both mechanical parts in a geometry and patient geometry. Static patient CT datasets, like those found in typical radiotherapy treatment plans, are often very large and impose a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan and representing them in a mesh-based form similar to that used in computer-aided design, the above optimisation techniques can be applied to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, the mesh-based representation allows direct manipulation of the geometry, enabling, for example, motion augmentation for time-dependent dose calculation. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulations, like those made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous measurement of the 3D dose distribution and the dose rate.

This work demonstrates the effectiveness of applying alternative, equivalent geometry definitions to complex geometries for the purpose of improving Monte Carlo simulation performance. Additionally, these alternative geometry definitions allow manipulations to be performed on otherwise static and rigid geometry.
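As an illustration of the kind of primitive underlying tetrahedral-mesh navigation, the sketch below implements point location in a single tetrahedron: a point lies inside when substituting it for each vertex preserves the sign of the signed volume (the barycentric-coordinate test). GEANT4's actual navigator is far more involved; the geometry and values here are invented.

```python
# Point-in-tetrahedron test via signed volumes, the core primitive of
# locating a particle within a tetrahedral mesh.
def det3(a, b, c):
    """Determinant of the 3x3 matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def signed_volume(p0, p1, p2, p3):
    sub = lambda u, v: (u[0] - v[0], u[1] - v[1], u[2] - v[2])
    return det3(sub(p1, p0), sub(p2, p0), sub(p3, p0)) / 6.0

def contains(tet, point):
    """Inside iff replacing each vertex with the point keeps the volume's sign."""
    total = signed_volume(*tet)
    for i in range(4):
        test = list(tet)
        test[i] = point
        if signed_volume(*test) * total < 0:
            return False
    return True

tet = ((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
print(contains(tet, (0.1, 0.1, 0.1)))  # True: inside the unit tetrahedron
print(contains(tet, (1.0, 1.0, 1.0)))  # False: outside
```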