692 results for cloud computing services
Abstract:
In recent years there has been noticeable growth in the adoption of cloud computing technologies, with initial uptake by individuals and small businesses and, more recently, by large organisations. This technology has served as the foundation for the emergence of a set of new trends, such as the Internet of Things connecting our personal devices and wearables to social networks, big data processes that make it possible to characterise customer behaviour, and integrated service desks that make citizens' lives easier. However, as with all disruptive new trends, which bring with them a set of opportunities, they also bring a set of new risks that need to be taken into account. Although this path is becoming practically inevitable for a large share of companies and government entities, its adoption must be subject to permanent evaluation and monitoring of the associated advantages and risks. To this end, it is essential that organisations equip themselves with efficient risk management, so that they can characterise risks (identify, analyse and quantify them) and move towards this new paradigm in a safe and methodical way. If they do not, the risks become evident, ranging from a possible loss of competitiveness against their peers to a loss of trust from customers and business partners, potentially culminating in total business inactivity. This master's thesis develops a generic risk analysis based on the ISO 31000:2009 standard and a proposed risk register that can support decision-making processes in the contracting and maintenance of cloud computing services by those responsible for private or state organisations.
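To make the idea of a risk register concrete, the minimal sketch below shows one way such a register might be modelled; the field names and the likelihood-times-impact scoring are assumptions for illustration, not the thesis's actual proposal or anything mandated by ISO 31000.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a hypothetical cloud-adoption risk register (fields are illustrative)."""
    risk_id: str
    description: str           # e.g. "vendor lock-in", "data residency breach"
    likelihood: int            # 1 (rare) .. 5 (almost certain)
    impact: int                # 1 (negligible) .. 5 (catastrophic)
    treatment: str = "accept"  # accept / mitigate / transfer / avoid

    @property
    def level(self) -> int:
        # Simple likelihood x impact scoring, a common (but not required) practice.
        return self.likelihood * self.impact

register = [
    RiskEntry("R01", "Loss of availability of a contracted SaaS service", 3, 4, "mitigate"),
    RiskEntry("R02", "Vendor lock-in preventing migration of workloads", 4, 3, "mitigate"),
]

# Rank risks so decision makers see the highest exposures first.
for entry in sorted(register, key=lambda r: r.level, reverse=True):
    print(entry.risk_id, entry.description, entry.level, entry.treatment)
```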
Abstract:
The diversity in the way cloud providers offer their services, state their SLAs, present their QoS, or support different technologies makes the portability and interoperability of cloud applications very difficult and favours the well-known vendor lock-in problem. We propose a model to describe cloud applications and the required resources in an agnostic, provider- and resource-independent way, in which individual application modules, and entire applications, may be re-deployed using different services without modification. To support this model, and following the proposal of a variety of cross-cloud application management tools by different authors, we propose going one step further in the unification of cloud services with a management approach in which IaaS and PaaS services are integrated into a unified interface. We provide support for deploying applications whose components are distributed across different cloud providers, using IaaS and PaaS services interchangeably.
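As a rough illustration of what a provider- and resource-independent application description could look like, the sketch below uses a plain Python structure; the field names and the toy matching strategy are assumptions, not the authors' actual model or interface.

```python
# Hypothetical, provider-agnostic description of a two-module application.
application = {
    "name": "web-shop",
    "modules": [
        {
            "name": "frontend",
            "artifact": "frontend.war",
            "requirements": {"runtime": "java", "min_memory_gb": 2},
            # No provider or service type is fixed: the management layer may pick
            # a PaaS runtime or provision an IaaS VM that meets the requirements.
            "allowed_targets": ["PaaS", "IaaS"],
        },
        {
            "name": "orders-db",
            "artifact": "schema.sql",
            "requirements": {"engine": "postgresql", "min_storage_gb": 20},
            "allowed_targets": ["PaaS", "IaaS"],
        },
    ],
}

def plan_deployment(app, offerings):
    """Match each module to the first offering that satisfies it (toy strategy)."""
    plan = {}
    for module in app["modules"]:
        for offer in offerings:
            if offer["kind"] in module["allowed_targets"]:
                plan[module["name"]] = f'{offer["provider"]}/{offer["kind"]}'
                break
    return plan

offerings = [
    {"provider": "provider-A", "kind": "PaaS"},
    {"provider": "provider-B", "kind": "IaaS"},
]
print(plan_deployment(application, offerings))
```

Because the description never names a provider, modules can be re-planned against a different set of offerings without editing the application model itself.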
Abstract:
Current infrastructure as a service (IaaS) cloud systems allow users to load their own virtual machines. However, most of these systems do not provide users with an automatic mechanism for loading a network topology of virtual machines. To specify and implement the network topology, we use software switches and routers as network elements. Before running a group of virtual machines, the user sets up the system once to specify a network topology of virtual machines. Then, given the user's request to run a specific topology, our system loads the appropriate virtual machines (VMs) and also runs separate VMs as software switches and routers. Furthermore, we have developed a manager that handles physical hardware failure situations. The system has been designed so that users can use it without knowing all of its internal technical details.
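The abstract does not give the actual topology specification format, so the sketch below is a hypothetical illustration of the idea: user VMs, switch/router VMs and links are declared once, and a loader boots the network elements before the user VMs.

```python
# Hypothetical topology description; names and structure are illustrative only.
topology = {
    "vms":      ["web1", "web2", "db1"],
    "switches": ["sw0"],                 # each switch/router runs as its own VM
    "routers":  ["r0"],
    "links": [
        ("web1", "sw0"), ("web2", "sw0"),
        ("sw0", "r0"),   ("r0", "db1"),
    ],
}

def boot_topology(topo):
    # Boot the software network elements first, then the user VMs,
    # then wire up each declared link.
    for element in topo["switches"] + topo["routers"]:
        print(f"starting network-element VM {element}")
    for vm in topo["vms"]:
        print(f"starting user VM {vm}")
    for a, b in topo["links"]:
        print(f"connecting {a} <-> {b}")

boot_topology(topology)
```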
Abstract:
Objectives: To discuss how current research in the area of smart homes and ambient assisted living will be influenced by the use of big data. Methods: A scoping review of literature published in scientific journals and conference proceedings was performed, focusing on smart homes, ambient assisted living and big data over the years 2011-2014. Results: The health and social care market has lagged behind other markets when it comes to the introduction of innovative IT solutions, and the market faces a number of challenges as the use of big data increases. First, there is a need for a sustainable and trustworthy information chain in which the needed information can be transferred from all producers to all consumers in a structured way. Second, there is a need for big data strategies and policies to manage the new situation in which information is handled and transferred independently of the location of the expertise. Finally, there is an opportunity to develop new and innovative business models for a market that supports cloud computing, social media, crowdsourcing, etc. Conclusions: The interdisciplinary area of big data, smart homes and ambient assisted living is no longer only of interest to IT developers; it is also of interest to decision makers, as customers make more informed choices among today's services. In the future it will be important to make information usable for managers and to improve decision making, tailor smart home services based on big data, develop new business models, increase competition and identify policies to ensure privacy, security and liability.
Abstract:
Surgical interventions are usually performed in an operating room; however, the medical team's access to information during the intervention is limited. In conversations with medical staff, we observed that they attach significant importance to improving direct, real-time access to information and communication through queries during the procedure. This is because the current procedure is rather slow and there is a lack of interaction with the systems in the operating room. These systems can be integrated in the Cloud, adding new functionality to the existing systems in which the medical records are processed. Therefore, such a communication system needs to be built upon information and interaction access that is specifically designed and developed to aid the medical specialists. Copyright 2014 ACM.
Abstract:
Information overload has become a serious issue for web users. Personalisation can provide effective solutions to overcome this problem, and recommender systems are one popular personalisation tool to help users deal with it. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affects the performance of recommender systems and other personalisation systems. In Web 2.0, emerging user information provides new possible solutions for profiling users. Folksonomy, or tag information, is a typical kind of Web 2.0 information. Folksonomy implies users' topic interests and opinion information, and it has become another important source of user information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a lot of noise such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solve the tag quality problem and profile users accurately. Harvesting the wisdom of crowds and experts, three new user profiling approaches are proposed: a folksonomy-based user profiling approach, a taxonomy-based user profiling approach, and a hybrid user profiling approach based on folksonomy and taxonomy. The proposed user profiling approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user- and item-based collaborative filtering approaches, combined with content filtering methods, are proposed to make recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness evaluation experiments were conducted on two real-world datasets collected from the Amazon.com and CiteULike websites. The experimental results demonstrate that the proposed user profiling and recommendation approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation approach based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability evaluation experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to the effective use of the wisdom of crowds and experts to help users overcome information overload by providing more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to better use of the taxonomy information given by experts and the folksonomy information contributed by users in Web 2.0.
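For readers unfamiliar with tag-based profiling, the following minimal sketch shows the general idea of a folksonomy-based profile and a content-style recommendation step: a user is represented by weighted tag frequencies and items are ranked by cosine similarity. It is an assumption-laden simplification, not the thesis's actual algorithms, which additionally handle tag noise and combine folksonomy with taxonomy.

```python
from collections import Counter
from math import sqrt

def tag_profile(taggings):
    """taggings: list of tags a user (or item) has been annotated with."""
    counts = Counter(taggings)
    total = sum(counts.values())
    return {tag: c / total for tag, c in counts.items()}

def cosine(p, q):
    common = set(p) & set(q)
    num = sum(p[t] * q[t] for t in common)
    den = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return num / den if den else 0.0

user = tag_profile(["cloud", "hadoop", "cloud", "mapreduce", "python"])
items = {
    "paper-A": tag_profile(["cloud", "mapreduce"]),
    "paper-B": tag_profile(["cooking", "recipes"]),
}

# Rank items by similarity to the user's tag profile.
ranked = sorted(items, key=lambda i: cosine(user, items[i]), reverse=True)
print(ranked)  # 'paper-A' should rank above 'paper-B'
```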
Abstract:
The large-scale, emerging user-created information in Web 2.0, such as tags, reviews, comments and blogs, can be used to profile users' interests and preferences and to make personalized recommendations. To solve the scalability problem of current user profiling and recommender systems, this paper proposes a parallel user profiling approach and a scalable recommender system. Current advanced cloud computing techniques, including Hadoop, MapReduce and Cascading, are employed to implement the proposed approaches. The experiments were conducted on Amazon EC2 Elastic MapReduce and S3 with a real-world, large-scale dataset from the Del.icio.us website.
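In the spirit of the MapReduce-based profiling described above (but not the paper's actual Cascading pipeline), a Hadoop Streaming-style sketch of parallel per-user tag counting might look like the following; the input format and script layout are assumptions for illustration.

```python
# Run locally as: cat taggings.tsv | python profile.py map | sort | python profile.py reduce
# Each input line is assumed to be: "user<TAB>item<TAB>tag".
import sys
from itertools import groupby

def mapper(lines):
    # Emit one (user, tag, 1) record per tagging event.
    for line in lines:
        user, _item, tag = line.rstrip("\n").split("\t")
        print(f"{user}\t{tag}\t1")

def reducer(lines):
    # Input is sorted by (user, tag); sum the counts to build each user's tag profile.
    rows = (line.rstrip("\n").split("\t") for line in lines)
    for (user, tag), group in groupby(rows, key=lambda r: (r[0], r[1])):
        total = sum(int(count) for _, _, count in group)
        print(f"{user}\t{tag}\t{total}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```

On a cluster, the framework shards the map input and groups the sorted keys, so the per-user profiles are computed in parallel across nodes.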
Abstract:
Recent changes in IT organisations have resulted in changes to library IT support. Concurrently, new tools and systems for service delivery have become available, but these require a move away from the traditional ICT model. Many libraries are investigating new models, including Software as a Service (SaaS), cloud computing and open source software. This paper considers whether the adoption of these tools and environments by libraries has occurred as a result of a lack of suitable ICT solutions and support from ICT organisations. It also considers what skills library staff need in order to ensure sustainability, supportability and, ultimately, success.
Abstract:
EMR (Electronic Medical Record) is an emerging technology that blends the non-IT and IT areas, and one methodology for linking them is to construct databases. Nowadays, an EMR supports pre- and post-treatment care for patients and should satisfy all stakeholders, such as practitioners, nurses, researchers, administrators and financial departments. For database maintenance, the DAS (Data as a Service) model is one solution for outsourcing. However, there are scalability and strategy issues to plan for if the DAS model is to be used properly. We constructed three kinds of databases: plain-text, MS built-in encryption (an in-house model) and a custom AES (Advanced Encryption Standard) DAS model, scaling from 5K to 2560K records. To make the custom AES-DAS perform better, we also devised a Bucket Index using a Bloom Filter. The simulation showed that response times increased arithmetically at first but, after a certain threshold, increased exponentially. In conclusion, if the database model is close to an in-house model, then vendor technology is a good way to obtain query response times in a consistent manner. If the model is a DAS model, it is easy to outsource the database; however, techniques such as the Bucket Index enhance its utilisation. To get faster query response times, database design considerations such as field types are also important. This study suggests that cloud computing could be the next DAS model to address the scalability and security issues.
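To illustrate the general idea of a Bloom-filter-backed bucket index over encrypted, outsourced records (the paper's exact Bucket Index design is not given in the abstract, so the structure below is an assumption), a minimal sketch is:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: fast membership test with possible false positives."""
    def __init__(self, size_bits=1024, hashes=3):
        self.size, self.hashes, self.bits = size_bits, hashes, 0

    def _positions(self, value):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{value}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, value):
        for pos in self._positions(value):
            self.bits |= 1 << pos

    def might_contain(self, value):
        return all((self.bits >> pos) & 1 for pos in self._positions(value))

# Records are stored encrypted in fixed-size buckets on the outsourced server;
# the client keeps one small Bloom filter per bucket so a query only fetches
# and decrypts buckets whose filter reports a possible match.
records = [f"P{i:03d}" for i in range(100)]
bucket_size = 25
buckets, filters = [], []
for start in range(0, len(records), bucket_size):
    chunk = records[start:start + bucket_size]
    bf = BloomFilter()
    for key in chunk:
        bf.add(key)
    buckets.append(chunk)   # stands in for the encrypted bucket
    filters.append(bf)

query = "P042"
candidates = [i for i, bf in enumerate(filters) if bf.might_contain(query)]
print("buckets to fetch and decrypt:", candidates)  # typically just one
```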
Abstract:
Submission to the Australian Government Attorney-General's Department consultation paper on Revising the Scope of the Copyright 'Safe Harbour Scheme'
Abstract:
Improving energy efficiency has become increasingly important in data centers in recent years, in order to reduce the tremendous and rapidly growing amount of electricity they consume. The power dissipation of the physical servers is the root cause of the power usage of other systems, such as cooling systems. Many efforts have been made to make data centers more energy efficient. One of them is to minimize the total power consumption of the servers in a data center through virtual machine consolidation, which is implemented by virtual machine placement. The placement problem is often modeled as a bin packing problem. Due to the NP-hard nature of the problem, heuristic solutions such as the First Fit and Best Fit algorithms have often been used and generally give good results. However, their performance leaves room for further improvement. In this paper we propose a Simulated Annealing (SA) based algorithm, which aims at further improvement starting from any feasible placement. This is the first published attempt to use SA to solve the VM placement problem so as to optimize power consumption. Experimental results show that this SA algorithm can generate better results, saving up to 25 percent more energy than First Fit Decreasing within an acceptable time frame.
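The sketch below illustrates the general shape of such an approach under stated assumptions (a single CPU dimension, host count as a proxy for power, a simple one-VM migration move); it is not the paper's algorithm, only a minimal Simulated Annealing refinement of a First Fit placement.

```python
import math, random

vms = [2, 4, 3, 1, 5, 2, 2]   # CPU demand of each VM (arbitrary units)
HOST_CAPACITY = 8

def first_fit(vms):
    hosts, placement = [], []
    for demand in vms:
        for h, load in enumerate(hosts):
            if load + demand <= HOST_CAPACITY:
                hosts[h] += demand
                placement.append(h)
                break
        else:
            hosts.append(demand)
            placement.append(len(hosts) - 1)
    return placement

def cost(placement):
    return len(set(placement))          # proxy for power: number of powered-on hosts

def feasible(placement):
    loads = {}
    for vm, host in enumerate(placement):
        loads[host] = loads.get(host, 0) + vms[vm]
    return all(load <= HOST_CAPACITY for load in loads.values())

placement = first_fit(vms)              # start from a feasible heuristic solution
temperature = 10.0
while temperature > 0.01:
    candidate = placement[:]
    candidate[random.randrange(len(vms))] = random.choice(placement)  # migrate one VM
    if feasible(candidate):
        delta = cost(candidate) - cost(placement)
        # Accept improvements always; accept worse moves with decreasing probability.
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            placement = candidate
    temperature *= 0.95                 # cooling schedule

print("hosts used:", cost(placement), "placement:", placement)
```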
Abstract:
Research on Enterprise Resource Planning (ERP) Systems is becoming a well-established research theme in Information Systems (IS) research. Enterprise Resource Planning Systems, given their unique differentiation from other IS applications, have provided an interesting backdrop for testing and re-testing some of the key and fundamental concepts in IS. While some researchers have tested well-established concepts of technology acceptance, system usage and system success in the context of ERP Systems, others have researched how new paradigms like cloud computing and social media integrate with ERP Systems. Moreover, ERP Systems have provided the context for cross-disciplinary research such as knowledge management, project management and business process management research. Almost two decades after its inception in IS research, this paper provides a critique of 198 papers on ERP Systems published from 2006 to 2012. We observe patterns in ES research, provide comparisons to past studies and propose future research directions.
Abstract:
Firms are moving away from decentralized regional offices. Last year the author spoke with a valuer working on the Sunshine Coast for a Brisbane firm. In years past this valuer would have left home in the morning to go to the office, as well as travelling during the day to client sites. Now they get up, have breakfast, change out of their pyjamas (if they have meetings!) and walk into the home office their employer has set up to 'punch in'. Apart from travel for essential meetings at head office, or for on-site inspections, they can attend work and engage with colleagues and clients without ever leaving home. While this practice may be a cost saving for the firm and a commuter-friendly way of working, it raises a range of issues that need to be managed.
Abstract:
Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results when compared to the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results acquired using Monte Carlo simulation, however, often require orders of magnitude more calculation time to attain high precision, thereby reducing its utility within the clinical environment. This work aims to improve the utility of Monte Carlo simulation within the clinical environment by developing techniques that enable faster Monte Carlo simulation of radiotherapy geometries. This is achieved principally through the use of new high performance computing environments and simpler, yet equivalent, alternative representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n as the number n of cloud-based computers performing the calculation in parallel increases. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed in this work. At the simulation implementation level, a new computer aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalents in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry. Further, a new technique for navigating tessellated or meshed geometries is described, allowing up to 3 orders of magnitude performance improvement with the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition of both mechanical parts in a geometry and patient geometry. Static patient CT datasets like those found in typical radiotherapy treatment plans are often very large and impose a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan and representing them in a mesh-based form similar to those used in computer aided design, the above-mentioned optimisation techniques can be used to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, this mesh-based representation allows for direct manipulation of the geometry, enabling, for example, motion augmentation for time-dependent dose calculation. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulations, like the ones made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement.
This work demonstrates the effectiveness of applying alternative and equivalent geometry definitions to complex geometries for the purposes of Monte Carlo simulation performance improvement. Additionally, these alternative geometry definitions allow for manipulations to be performed on otherwise static and rigid geometry.
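As a toy illustration of the 1/n scaling mentioned in the abstract above (and nothing more: the placeholder "physics" below stands in for a real GEANT4 run), a Monte Carlo job of N histories can be split evenly across n independent workers and the partial tallies summed afterwards; the worker count and seeding scheme here are assumptions.

```python
import random
from multiprocessing import Pool

def simulate_histories(args):
    n_histories, seed = args
    rng = random.Random(seed)
    # Placeholder "physics": deposit a random dose per history; a real run would
    # track particles through the simulation geometry instead.
    return sum(rng.random() for _ in range(n_histories))

def run(total_histories=1_000_000, workers=4):
    per_worker = total_histories // workers
    jobs = [(per_worker, seed) for seed in range(workers)]  # independent seeds per worker
    with Pool(workers) as pool:                             # workers stand in for cloud nodes
        partial_doses = pool.map(simulate_histories, jobs)
    return sum(partial_doses)                               # combine the partial tallies

if __name__ == "__main__":
    print(run())
```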
Abstract:
The current global economic instability and the vulnerability of small island nations are providing the impetus for greater integration between the countries of the South Pacific region. This exercise is critical for their survival in today's turbulent economic environment. Past efforts at regional integration in the South Pacific have not been very successful. Reasons attributed to this outcome include issues related to perceived damage to sovereignty and the lack of a shared integration infrastructure. Today, IT resources with collaborative capacities provide the opportunity to develop a shared IT infrastructure to facilitate integration in the South Pacific. In an attempt to develop a model of regional integration with an IT-backed infrastructure, we identify and report on the antecedents of the current stage of regional integration and the stakeholders' perceived benefits of an IT-resources-backed regional integration in the South Pacific. Employing a case-study-based approach, the study finds that while most stakeholders were positive about the potential of IT-backed regional integration, significant challenges hinder the realisation of this model. The study also finds that facilitating IT-backed regional integration requires enabling IT infrastructure, equitable IT development in the region, greater awareness of the potential of modern IT resources, market liberalisation of the information and telecommunications sector, and greater political support for IT initiatives.